Test Report: KVM_Linux_crio 18424

1ff1985e433cf64121c1d5b23135320107f58df6:2024-10-07:36542

Failed tests (32/318)

Order  Failed test  Duration (s)
32 TestAddons/serial/GCPAuth/PullSecret 480.69
35 TestAddons/parallel/Ingress 152.36
37 TestAddons/parallel/MetricsServer 321.7
45 TestAddons/StoppedEnableDisable 154.31
164 TestMultiControlPlane/serial/StopSecondaryNode 141.84
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.81
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.54
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.3
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 360.33
171 TestMultiControlPlane/serial/StopCluster 142.11
231 TestMultiNode/serial/RestartKeepsNodes 319.7
233 TestMultiNode/serial/StopMultiNode 145.5
240 TestPreload 164.71
248 TestKubernetesUpgrade 398.93
291 TestStartStop/group/old-k8s-version/serial/FirstStart 273.71
298 TestStartStop/group/no-preload/serial/Stop 139.24
301 TestStartStop/group/embed-certs/serial/Stop 139.22
302 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 102.51
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
312 TestStartStop/group/old-k8s-version/serial/SecondStart 752.15
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.03
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.83
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.81
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.05
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.24
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 441.43
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 290.16
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 124.66
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 179.1
TestAddons/serial/GCPAuth/PullSecret (480.69s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-054971 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-054971 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [25d5204e-dbd2-40d4-8608-1c35f98a64d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-054971 -n addons-054971
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 12:18:14.02313448 +0000 UTC m=+627.221674472
addons_test.go:627: (dbg) Run:  kubectl --context addons-054971 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-054971 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-054971/192.168.39.62
Start Time:       Mon, 07 Oct 2024 12:10:13 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
IP:  10.244.0.22
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sbklt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sbklt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/busybox to addons-054971
Normal   Pulling    6m40s (x4 over 8m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m40s (x4 over 8m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m40s (x4 over 8m)   kubelet            Error: ErrImagePull
Warning  Failed     6m17s (x6 over 8m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m46s (x21 over 8m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-054971 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-054971 logs busybox -n default: exit status 1 (75.553976ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-054971 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.69s)
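
The kubelet events above show every pull attempt failing with an authentication error ("unable to retrieve auth token: invalid username/password"), not a missing image, which points at the credentials the gcp-auth addon injects into the pod (the GOOGLE_APPLICATION_CREDENTIALS env vars and gcp-creds mount in the describe output) rather than at the image itself. A minimal manual triage sketch, reusing the profile and commands from this test; the "gcp-auth" namespace and the crictl pull check are assumptions, not part of the captured output:

# Re-check the failing pod and its pull events (same objects the test inspects).
kubectl --context addons-054971 describe pod busybox -n default
kubectl --context addons-054971 get events -n default --field-selector involvedObject.name=busybox

# Look at what gcp-auth injected into the namespace and whether the addon's
# pods are healthy (namespace "gcp-auth" is an assumption about the addon layout).
kubectl --context addons-054971 get secrets -n default
kubectl --context addons-054971 get pods -n gcp-auth

# Rule out a registry-side problem: if the image is public, an anonymous pull
# from inside the VM should still succeed.
out/minikube-linux-amd64 -p addons-054971 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"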

TestAddons/parallel/Ingress (152.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-054971 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-054971 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-054971 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cca3b2af-dfec-4a2d-99be-b6c1e43f30f7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cca3b2af-dfec-4a2d-99be-b6c1e43f30f7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00682243s
I1007 12:18:51.914454  754324 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-054971 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.813153819s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-054971 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.62
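
For the ssh curl step above, "Process exited with status 28" is curl's timeout exit code, so the request inside the VM never got a response on port 80 within the limit rather than receiving an HTTP error. A short sketch of how this could be narrowed down by hand with the same profile; the ingress-nginx namespace and controller label come from the wait command at the start of the test, while checking the Ingress object in "default" and the explicit --max-time are assumptions:

# Confirm the controller is running and the Ingress resource got an address.
kubectl --context addons-054971 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
kubectl --context addons-054971 get ingress -n default

# Repeat the in-VM request with a hard timeout and verbose output.
out/minikube-linux-amd64 -p addons-054971 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

# See whether the request ever reached the controller.
kubectl --context addons-054971 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
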
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-054971 -n addons-054971
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 logs -n 25: (1.551610099s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| delete  | -p download-only-478522                                                                     | download-only-478522 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| delete  | -p download-only-096310                                                                     | download-only-096310 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| delete  | -p download-only-478522                                                                     | download-only-478522 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-969518 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | binary-mirror-969518                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40857                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-969518                                                                     | binary-mirror-969518 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| addons  | enable dashboard -p                                                                         | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | addons-054971                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | addons-054971                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-054971 --wait=true                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-054971 ip                                                                            | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-054971 ssh curl -s                                                                   | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-054971 ssh cat                                                                       | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | /opt/local-path-provisioner/pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:19 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | -p addons-054971                                                                            |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | -p addons-054971                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-054971 ip                                                                            | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:21 UTC | 07 Oct 24 12:21 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
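
The wrapped "start" row in the Audit table above corresponds to a single invocation along the following lines, reassembled here from the table cells for readability; the binary path matches the one used elsewhere in this report and the flag order is arbitrary:

out/minikube-linux-amd64 start -p addons-054971 --wait=true --memory=4000 --alsologtostderr \
  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
  --addons=yakd --addons=volcano --addons=ingress --addons=ingress-dns \
  --addons=storage-provisioner-rancher --driver=kvm2 --container-runtime=crio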
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:07:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:07:59.572642  754935 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:07:59.572806  754935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:59.572819  754935 out.go:358] Setting ErrFile to fd 2...
	I1007 12:07:59.572826  754935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:59.573017  754935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:07:59.573657  754935 out.go:352] Setting JSON to false
	I1007 12:07:59.574654  754935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6629,"bootTime":1728296251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:07:59.574784  754935 start.go:139] virtualization: kvm guest
	I1007 12:07:59.577043  754935 out.go:177] * [addons-054971] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:07:59.578505  754935 notify.go:220] Checking for updates...
	I1007 12:07:59.578532  754935 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:07:59.580178  754935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:07:59.581611  754935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:07:59.582882  754935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:07:59.584387  754935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:07:59.585594  754935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:07:59.587047  754935 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:07:59.622808  754935 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:07:59.624975  754935 start.go:297] selected driver: kvm2
	I1007 12:07:59.625004  754935 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:07:59.625038  754935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:07:59.625817  754935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:59.625911  754935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:07:59.642331  754935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:07:59.642394  754935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:07:59.642668  754935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:07:59.642704  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:07:59.642757  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:07:59.642785  754935 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 12:07:59.642857  754935 start.go:340] cluster config:
	{Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:59.642973  754935 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:59.645394  754935 out.go:177] * Starting "addons-054971" primary control-plane node in "addons-054971" cluster
	I1007 12:07:59.647155  754935 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:59.647239  754935 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:07:59.647253  754935 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:59.647371  754935 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:59.647386  754935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:59.647752  754935 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json ...
	I1007 12:07:59.647782  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json: {Name:mka4931e420d409240060afe28d91b99168dee52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:59.647962  754935 start.go:360] acquireMachinesLock for addons-054971: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:59.648043  754935 start.go:364] duration metric: took 60.101µs to acquireMachinesLock for "addons-054971"
	I1007 12:07:59.648073  754935 start.go:93] Provisioning new machine with config: &{Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:59.648138  754935 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:07:59.650270  754935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 12:07:59.650444  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:59.650514  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:59.665985  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I1007 12:07:59.666530  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:59.667183  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:07:59.667229  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:59.667719  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:59.667995  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:07:59.668183  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:07:59.668424  754935 start.go:159] libmachine.API.Create for "addons-054971" (driver="kvm2")
	I1007 12:07:59.668463  754935 client.go:168] LocalClient.Create starting
	I1007 12:07:59.668515  754935 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:07:59.806192  754935 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:08:00.082840  754935 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:00.082875  754935 main.go:141] libmachine: (addons-054971) Calling .PreCreateCheck
	I1007 12:08:00.083462  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:00.083962  754935 main.go:141] libmachine: Creating machine...
	I1007 12:08:00.083991  754935 main.go:141] libmachine: (addons-054971) Calling .Create
	I1007 12:08:00.084174  754935 main.go:141] libmachine: (addons-054971) Creating KVM machine...
	I1007 12:08:00.085613  754935 main.go:141] libmachine: (addons-054971) DBG | found existing default KVM network
	I1007 12:08:00.086673  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.086483  754957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I1007 12:08:00.086751  754935 main.go:141] libmachine: (addons-054971) DBG | created network xml: 
	I1007 12:08:00.086777  754935 main.go:141] libmachine: (addons-054971) DBG | <network>
	I1007 12:08:00.086791  754935 main.go:141] libmachine: (addons-054971) DBG |   <name>mk-addons-054971</name>
	I1007 12:08:00.086804  754935 main.go:141] libmachine: (addons-054971) DBG |   <dns enable='no'/>
	I1007 12:08:00.086816  754935 main.go:141] libmachine: (addons-054971) DBG |   
	I1007 12:08:00.086831  754935 main.go:141] libmachine: (addons-054971) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:08:00.086846  754935 main.go:141] libmachine: (addons-054971) DBG |     <dhcp>
	I1007 12:08:00.086855  754935 main.go:141] libmachine: (addons-054971) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:08:00.086861  754935 main.go:141] libmachine: (addons-054971) DBG |     </dhcp>
	I1007 12:08:00.086868  754935 main.go:141] libmachine: (addons-054971) DBG |   </ip>
	I1007 12:08:00.086873  754935 main.go:141] libmachine: (addons-054971) DBG |   
	I1007 12:08:00.086879  754935 main.go:141] libmachine: (addons-054971) DBG | </network>
	I1007 12:08:00.086889  754935 main.go:141] libmachine: (addons-054971) DBG | 
	I1007 12:08:00.092680  754935 main.go:141] libmachine: (addons-054971) DBG | trying to create private KVM network mk-addons-054971 192.168.39.0/24...
	I1007 12:08:00.164246  754935 main.go:141] libmachine: (addons-054971) DBG | private KVM network mk-addons-054971 192.168.39.0/24 created
	I1007 12:08:00.164284  754935 main.go:141] libmachine: (addons-054971) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 ...
	I1007 12:08:00.164310  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.164175  754957 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:08:00.164328  754935 main.go:141] libmachine: (addons-054971) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:00.164348  754935 main.go:141] libmachine: (addons-054971) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:00.437829  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.437643  754957 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa...
	I1007 12:08:00.654995  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.654793  754957 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/addons-054971.rawdisk...
	I1007 12:08:00.655033  754935 main.go:141] libmachine: (addons-054971) DBG | Writing magic tar header
	I1007 12:08:00.655050  754935 main.go:141] libmachine: (addons-054971) DBG | Writing SSH key tar header
	I1007 12:08:00.655061  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.654922  754957 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 ...
	I1007 12:08:00.655075  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971
	I1007 12:08:00.655082  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:08:00.655091  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:08:00.655097  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:08:00.655107  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:00.655116  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:00.655126  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home
	I1007 12:08:00.655137  754935 main.go:141] libmachine: (addons-054971) DBG | Skipping /home - not owner
	I1007 12:08:00.655153  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 (perms=drwx------)
	I1007 12:08:00.655162  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:00.655172  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:08:00.655184  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:08:00.655217  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:00.655237  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:00.655246  754935 main.go:141] libmachine: (addons-054971) Creating domain...
	I1007 12:08:00.656395  754935 main.go:141] libmachine: (addons-054971) define libvirt domain using xml: 
	I1007 12:08:00.656430  754935 main.go:141] libmachine: (addons-054971) <domain type='kvm'>
	I1007 12:08:00.656439  754935 main.go:141] libmachine: (addons-054971)   <name>addons-054971</name>
	I1007 12:08:00.656445  754935 main.go:141] libmachine: (addons-054971)   <memory unit='MiB'>4000</memory>
	I1007 12:08:00.656451  754935 main.go:141] libmachine: (addons-054971)   <vcpu>2</vcpu>
	I1007 12:08:00.656455  754935 main.go:141] libmachine: (addons-054971)   <features>
	I1007 12:08:00.656460  754935 main.go:141] libmachine: (addons-054971)     <acpi/>
	I1007 12:08:00.656466  754935 main.go:141] libmachine: (addons-054971)     <apic/>
	I1007 12:08:00.656471  754935 main.go:141] libmachine: (addons-054971)     <pae/>
	I1007 12:08:00.656481  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656486  754935 main.go:141] libmachine: (addons-054971)   </features>
	I1007 12:08:00.656496  754935 main.go:141] libmachine: (addons-054971)   <cpu mode='host-passthrough'>
	I1007 12:08:00.656534  754935 main.go:141] libmachine: (addons-054971)   
	I1007 12:08:00.656562  754935 main.go:141] libmachine: (addons-054971)   </cpu>
	I1007 12:08:00.656586  754935 main.go:141] libmachine: (addons-054971)   <os>
	I1007 12:08:00.656603  754935 main.go:141] libmachine: (addons-054971)     <type>hvm</type>
	I1007 12:08:00.656610  754935 main.go:141] libmachine: (addons-054971)     <boot dev='cdrom'/>
	I1007 12:08:00.656615  754935 main.go:141] libmachine: (addons-054971)     <boot dev='hd'/>
	I1007 12:08:00.656621  754935 main.go:141] libmachine: (addons-054971)     <bootmenu enable='no'/>
	I1007 12:08:00.656627  754935 main.go:141] libmachine: (addons-054971)   </os>
	I1007 12:08:00.656632  754935 main.go:141] libmachine: (addons-054971)   <devices>
	I1007 12:08:00.656638  754935 main.go:141] libmachine: (addons-054971)     <disk type='file' device='cdrom'>
	I1007 12:08:00.656646  754935 main.go:141] libmachine: (addons-054971)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/boot2docker.iso'/>
	I1007 12:08:00.656653  754935 main.go:141] libmachine: (addons-054971)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:00.656658  754935 main.go:141] libmachine: (addons-054971)       <readonly/>
	I1007 12:08:00.656665  754935 main.go:141] libmachine: (addons-054971)     </disk>
	I1007 12:08:00.656674  754935 main.go:141] libmachine: (addons-054971)     <disk type='file' device='disk'>
	I1007 12:08:00.656686  754935 main.go:141] libmachine: (addons-054971)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:00.656695  754935 main.go:141] libmachine: (addons-054971)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/addons-054971.rawdisk'/>
	I1007 12:08:00.656702  754935 main.go:141] libmachine: (addons-054971)       <target dev='hda' bus='virtio'/>
	I1007 12:08:00.656706  754935 main.go:141] libmachine: (addons-054971)     </disk>
	I1007 12:08:00.656713  754935 main.go:141] libmachine: (addons-054971)     <interface type='network'>
	I1007 12:08:00.656734  754935 main.go:141] libmachine: (addons-054971)       <source network='mk-addons-054971'/>
	I1007 12:08:00.656741  754935 main.go:141] libmachine: (addons-054971)       <model type='virtio'/>
	I1007 12:08:00.656747  754935 main.go:141] libmachine: (addons-054971)     </interface>
	I1007 12:08:00.656755  754935 main.go:141] libmachine: (addons-054971)     <interface type='network'>
	I1007 12:08:00.656771  754935 main.go:141] libmachine: (addons-054971)       <source network='default'/>
	I1007 12:08:00.656787  754935 main.go:141] libmachine: (addons-054971)       <model type='virtio'/>
	I1007 12:08:00.656801  754935 main.go:141] libmachine: (addons-054971)     </interface>
	I1007 12:08:00.656817  754935 main.go:141] libmachine: (addons-054971)     <serial type='pty'>
	I1007 12:08:00.656829  754935 main.go:141] libmachine: (addons-054971)       <target port='0'/>
	I1007 12:08:00.656838  754935 main.go:141] libmachine: (addons-054971)     </serial>
	I1007 12:08:00.656858  754935 main.go:141] libmachine: (addons-054971)     <console type='pty'>
	I1007 12:08:00.656866  754935 main.go:141] libmachine: (addons-054971)       <target type='serial' port='0'/>
	I1007 12:08:00.656871  754935 main.go:141] libmachine: (addons-054971)     </console>
	I1007 12:08:00.656875  754935 main.go:141] libmachine: (addons-054971)     <rng model='virtio'>
	I1007 12:08:00.656884  754935 main.go:141] libmachine: (addons-054971)       <backend model='random'>/dev/random</backend>
	I1007 12:08:00.656889  754935 main.go:141] libmachine: (addons-054971)     </rng>
	I1007 12:08:00.656894  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656900  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656913  754935 main.go:141] libmachine: (addons-054971)   </devices>
	I1007 12:08:00.656925  754935 main.go:141] libmachine: (addons-054971) </domain>
	I1007 12:08:00.656940  754935 main.go:141] libmachine: (addons-054971) 
	I1007 12:08:00.663302  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:f5:15:6e in network default
	I1007 12:08:00.663783  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:00.663812  754935 main.go:141] libmachine: (addons-054971) Ensuring networks are active...
	I1007 12:08:00.664547  754935 main.go:141] libmachine: (addons-054971) Ensuring network default is active
	I1007 12:08:00.664921  754935 main.go:141] libmachine: (addons-054971) Ensuring network mk-addons-054971 is active
	I1007 12:08:00.665479  754935 main.go:141] libmachine: (addons-054971) Getting domain xml...
	I1007 12:08:00.666246  754935 main.go:141] libmachine: (addons-054971) Creating domain...
	I1007 12:08:01.210389  754935 main.go:141] libmachine: (addons-054971) Waiting to get IP...
	I1007 12:08:01.211322  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.211807  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.211832  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.211782  754957 retry.go:31] will retry after 302.532395ms: waiting for machine to come up
	I1007 12:08:01.516145  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.516598  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.516671  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.516534  754957 retry.go:31] will retry after 235.273407ms: waiting for machine to come up
	I1007 12:08:01.752903  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.753322  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.753352  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.753281  754957 retry.go:31] will retry after 339.470407ms: waiting for machine to come up
	I1007 12:08:02.095125  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:02.095554  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:02.095586  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:02.095501  754957 retry.go:31] will retry after 563.14845ms: waiting for machine to come up
	I1007 12:08:02.660208  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:02.660689  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:02.660715  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:02.660627  754957 retry.go:31] will retry after 525.569187ms: waiting for machine to come up
	I1007 12:08:03.187514  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:03.188033  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:03.188059  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:03.187980  754957 retry.go:31] will retry after 625.441425ms: waiting for machine to come up
	I1007 12:08:03.814765  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:03.815125  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:03.815148  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:03.815093  754957 retry.go:31] will retry after 741.448412ms: waiting for machine to come up
	I1007 12:08:04.558071  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:04.558559  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:04.558583  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:04.558499  754957 retry.go:31] will retry after 1.166707702s: waiting for machine to come up
	I1007 12:08:05.727215  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:05.728021  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:05.728067  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:05.727899  754957 retry.go:31] will retry after 1.558030288s: waiting for machine to come up
	I1007 12:08:07.287788  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:07.288772  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:07.289184  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:07.288566  754957 retry.go:31] will retry after 2.291932799s: waiting for machine to come up
	I1007 12:08:09.583293  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:09.583766  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:09.583885  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:09.583815  754957 retry.go:31] will retry after 2.102395553s: waiting for machine to come up
	I1007 12:08:11.688800  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:11.689284  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:11.689303  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:11.689222  754957 retry.go:31] will retry after 2.844478116s: waiting for machine to come up
	I1007 12:08:14.537542  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:14.537949  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:14.537968  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:14.537895  754957 retry.go:31] will retry after 4.101176697s: waiting for machine to come up
	I1007 12:08:18.644021  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:18.644418  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:18.644444  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:18.644366  754957 retry.go:31] will retry after 3.901511536s: waiting for machine to come up
	I1007 12:08:22.549411  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.550012  754935 main.go:141] libmachine: (addons-054971) Found IP for machine: 192.168.39.62
	I1007 12:08:22.550071  754935 main.go:141] libmachine: (addons-054971) Reserving static IP address...
	I1007 12:08:22.550089  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has current primary IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.550441  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find host DHCP lease matching {name: "addons-054971", mac: "52:54:00:06:35:95", ip: "192.168.39.62"} in network mk-addons-054971
	I1007 12:08:22.741070  754935 main.go:141] libmachine: (addons-054971) DBG | Getting to WaitForSSH function...
	I1007 12:08:22.741107  754935 main.go:141] libmachine: (addons-054971) Reserved static IP address: 192.168.39.62
	I1007 12:08:22.741120  754935 main.go:141] libmachine: (addons-054971) Waiting for SSH to be available...
	I1007 12:08:22.743956  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.744432  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:35:95}
	I1007 12:08:22.744480  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.744644  754935 main.go:141] libmachine: (addons-054971) DBG | Using SSH client type: external
	I1007 12:08:22.744670  754935 main.go:141] libmachine: (addons-054971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa (-rw-------)
	I1007 12:08:22.744700  754935 main.go:141] libmachine: (addons-054971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:08:22.744713  754935 main.go:141] libmachine: (addons-054971) DBG | About to run SSH command:
	I1007 12:08:22.744725  754935 main.go:141] libmachine: (addons-054971) DBG | exit 0
	I1007 12:08:22.874506  754935 main.go:141] libmachine: (addons-054971) DBG | SSH cmd err, output: <nil>: 
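
The probe above runs `exit 0` through an external ssh client with host-key checking disabled and the machine's generated identity file. A rough equivalent built with os/exec (the key path and target address are placeholders taken from the log, not a reusable API):

    // Sketch: reproduce the external-SSH liveness probe ("exit 0") shown above.
    // The identity file path and target address are placeholders for illustration.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/.minikube/machines/addons-054971/id_rsa",
            "-p", "22",
            "docker@192.168.39.62",
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", out, err)
    }
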
	I1007 12:08:22.874685  754935 main.go:141] libmachine: (addons-054971) KVM machine creation complete!
	I1007 12:08:22.875347  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:22.908849  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:22.909495  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:22.909843  754935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:08:22.909870  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:22.911317  754935 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:08:22.911338  754935 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:08:22.911344  754935 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:08:22.911350  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:22.914176  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.914698  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:35:95}
	I1007 12:08:22.914747  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.914990  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:22.915265  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:22.915477  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:22.915678  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:22.915890  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:22.916127  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:22.916142  754935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:08:23.029880  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:08:23.029908  754935 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:08:23.029916  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.033178  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.033588  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.033612  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.033819  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.034077  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.034262  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.034431  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.034592  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.034801  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.034815  754935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:08:23.151341  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:08:23.151412  754935 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:08:23.151419  754935 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:08:23.151430  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.151732  754935 buildroot.go:166] provisioning hostname "addons-054971"
	I1007 12:08:23.151768  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.151990  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.154694  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.155012  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.155052  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.155246  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.155430  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.155588  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.155729  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.155898  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.156077  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.156090  754935 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-054971 && echo "addons-054971" | sudo tee /etc/hostname
	I1007 12:08:23.285340  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-054971
	
	I1007 12:08:23.285375  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.288360  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.288768  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.288798  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.288999  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.289211  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.289383  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.289526  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.289704  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.289895  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.289910  754935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054971/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:08:23.416271  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:08:23.416312  754935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:08:23.416380  754935 buildroot.go:174] setting up certificates
	I1007 12:08:23.416404  754935 provision.go:84] configureAuth start
	I1007 12:08:23.416427  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.416841  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:23.419388  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.419711  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.419742  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.419875  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.422119  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.422421  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.422449  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.422580  754935 provision.go:143] copyHostCerts
	I1007 12:08:23.422691  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:08:23.422857  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:08:23.422947  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:08:23.423029  754935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.addons-054971 san=[127.0.0.1 192.168.39.62 addons-054971 localhost minikube]
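
provision.go then mints a server certificate whose SANs cover the loopback and machine addresses plus the host names listed above. A self-contained sketch of issuing such a certificate with crypto/x509 (self-signed here for brevity, whereas the real flow signs with the minikube CA key; error handling is trimmed):

    // Sketch: issue a server cert with the SANs shown in the log line above.
    // Self-signed for brevity; minikube signs with its CA key instead.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-054971"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.62")},
            DNSNames:     []string{"addons-054971", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
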
	I1007 12:08:23.850763  754935 provision.go:177] copyRemoteCerts
	I1007 12:08:23.850838  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:08:23.850865  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.853646  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.854185  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.854220  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.854413  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.854607  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.854752  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.855039  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:23.941420  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:08:23.969069  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:08:23.995784  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:08:24.021483  754935 provision.go:87] duration metric: took 605.054524ms to configureAuth
	I1007 12:08:24.021519  754935 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:08:24.021712  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:24.021794  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.024445  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.024732  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.024752  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.024944  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.025142  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.025329  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.025502  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.025658  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:24.025871  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:24.025887  754935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:08:24.266440  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:08:24.266472  754935 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:08:24.266482  754935 main.go:141] libmachine: (addons-054971) Calling .GetURL
	I1007 12:08:24.268085  754935 main.go:141] libmachine: (addons-054971) DBG | Using libvirt version 6000000
	I1007 12:08:24.270308  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.270671  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.270702  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.270899  754935 main.go:141] libmachine: Docker is up and running!
	I1007 12:08:24.270913  754935 main.go:141] libmachine: Reticulating splines...
	I1007 12:08:24.270921  754935 client.go:171] duration metric: took 24.602447605s to LocalClient.Create
	I1007 12:08:24.270945  754935 start.go:167] duration metric: took 24.602524604s to libmachine.API.Create "addons-054971"
	I1007 12:08:24.270965  754935 start.go:293] postStartSetup for "addons-054971" (driver="kvm2")
	I1007 12:08:24.270977  754935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:08:24.270995  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.271292  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:08:24.271322  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.273234  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.273548  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.273574  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.273712  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.273887  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.274077  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.274209  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.360828  754935 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:08:24.365389  754935 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:08:24.365445  754935 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:08:24.365532  754935 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:08:24.365567  754935 start.go:296] duration metric: took 94.594256ms for postStartSetup
	I1007 12:08:24.365620  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:24.366234  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:24.369106  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.369474  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.369502  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.369750  754935 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json ...
	I1007 12:08:24.369975  754935 start.go:128] duration metric: took 24.72182471s to createHost
	I1007 12:08:24.370004  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.372113  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.372404  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.372443  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.372589  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.372781  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.372944  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.373081  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.373250  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:24.373420  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:24.373430  754935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:08:24.487069  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302904.462104795
	
	I1007 12:08:24.487097  754935 fix.go:216] guest clock: 1728302904.462104795
	I1007 12:08:24.487105  754935 fix.go:229] Guest: 2024-10-07 12:08:24.462104795 +0000 UTC Remote: 2024-10-07 12:08:24.369989566 +0000 UTC m=+24.839624309 (delta=92.115229ms)
	I1007 12:08:24.487154  754935 fix.go:200] guest clock delta is within tolerance: 92.115229ms
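
fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and proceeds only when the skew stays small. A tiny illustration of that delta check (the one-second tolerance is an assumption for this sketch, not minikube's configured value):

    // Sketch: compute the guest/host clock delta as in the fix.go lines above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1728302904, 462104795)      // parsed from `date +%s.%N` on the guest
        host := guest.Add(-92115229 * time.Nanosecond) // host-side timestamp taken just before the probe
        delta := guest.Sub(host)
        const tolerance = time.Second // assumed tolerance for this sketch
        if delta > -tolerance && delta < tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; the clock would be adjusted\n", delta)
        }
    }
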
	I1007 12:08:24.487164  754935 start.go:83] releasing machines lock for "addons-054971", held for 24.839104324s
	I1007 12:08:24.487194  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.487488  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:24.490137  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.490612  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.490640  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.490816  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491321  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491483  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491592  754935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:08:24.491649  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.491796  754935 ssh_runner.go:195] Run: cat /version.json
	I1007 12:08:24.491831  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.494734  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495061  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.495087  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495107  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495305  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.495530  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.495609  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.495634  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495696  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.495771  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.495843  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.495881  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.496097  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.496285  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.607711  754935 ssh_runner.go:195] Run: systemctl --version
	I1007 12:08:24.613966  754935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:08:24.774833  754935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:08:24.781653  754935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:08:24.781735  754935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:08:24.799429  754935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:08:24.799461  754935 start.go:495] detecting cgroup driver to use...
	I1007 12:08:24.799550  754935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:08:24.816749  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:08:24.832373  754935 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:08:24.832448  754935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:08:24.847340  754935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:08:24.862121  754935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:08:24.974702  754935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:08:25.133182  754935 docker.go:233] disabling docker service ...
	I1007 12:08:25.133259  754935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:08:25.148190  754935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:08:25.161503  754935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:08:25.302582  754935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:08:25.415236  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:08:25.430690  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:08:25.450234  754935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:08:25.450304  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.461363  754935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:08:25.461533  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.472443  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.483633  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.494682  754935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:08:25.505823  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.517153  754935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.536034  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.547258  754935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:08:25.557100  754935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:08:25.557175  754935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:08:25.571038  754935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:08:25.581065  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.702234  754935 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:08:25.796548  754935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:25.796660  754935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:25.801839  754935 start.go:563] Will wait 60s for crictl version
	I1007 12:08:25.801921  754935 ssh_runner.go:195] Run: which crictl
	I1007 12:08:25.806239  754935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:25.850119  754935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:08:25.850233  754935 ssh_runner.go:195] Run: crio --version
	I1007 12:08:25.882752  754935 ssh_runner.go:195] Run: crio --version
	I1007 12:08:25.913822  754935 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:25.915342  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:25.918204  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:25.918593  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:25.918625  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:25.918910  754935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:25.923594  754935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:25.937482  754935 kubeadm.go:883] updating cluster {Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:08:25.937608  754935 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:25.937653  754935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:08:25.973328  754935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:08:25.973400  754935 ssh_runner.go:195] Run: which lz4
	I1007 12:08:25.977586  754935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:08:25.981791  754935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:08:25.981853  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:08:27.380108  754935 crio.go:462] duration metric: took 1.402551401s to copy over tarball
	I1007 12:08:27.380215  754935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:08:29.599799  754935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219548429s)
	I1007 12:08:29.599842  754935 crio.go:469] duration metric: took 2.219698523s to extract the tarball
	I1007 12:08:29.599852  754935 ssh_runner.go:146] rm: /preloaded.tar.lz4
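
ssh_runner prefixes each remote command with a Run: line and, when it takes long enough, follows up with a Completed: line carrying the wall-clock duration, which is where the 2.2s figure above comes from. A minimal version of that timing wrapper (the one-second reporting threshold is an assumption for this sketch):

    // Sketch: run a command and report its duration like ssh_runner's Completed: lines.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func runWithTiming(name string, args ...string) error {
        start := time.Now()
        err := exec.Command(name, args...).Run()
        if elapsed := time.Since(start); elapsed > time.Second {
            log.Printf("Completed: %s %v: (%s)", name, args, elapsed)
        }
        return err
    }

    func main() {
        if err := runWithTiming("sleep", "2"); err != nil {
            log.Fatal(err)
        }
    }
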
	I1007 12:08:29.639177  754935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:08:29.685454  754935 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:08:29.685490  754935 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:08:29.685501  754935 kubeadm.go:934] updating node { 192.168.39.62 8443 v1.31.1 crio true true} ...
	I1007 12:08:29.685632  754935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-054971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:29.685731  754935 ssh_runner.go:195] Run: crio config
	I1007 12:08:29.740722  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:08:29.740750  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:08:29.740762  754935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:08:29.740784  754935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054971 NodeName:addons-054971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:08:29.740945  754935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:08:29.741024  754935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:29.752821  754935 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:08:29.752909  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:08:29.764740  754935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:08:29.783575  754935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:29.802470  754935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1007 12:08:29.820581  754935 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:29.825059  754935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:29.839011  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:29.978306  754935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:29.996730  754935 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971 for IP: 192.168.39.62
	I1007 12:08:29.996769  754935 certs.go:194] generating shared ca certs ...
	I1007 12:08:29.996789  754935 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:29.996986  754935 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:08:30.125391  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt ...
	I1007 12:08:30.125430  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt: {Name:mkf38bf1f27b36c5a90d408329bd80f1d68bbecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.125621  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key ...
	I1007 12:08:30.125632  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key: {Name:mk168e4f92eadd0196eca20db6f9ccfcf5db1621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.125715  754935 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:08:30.305758  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt ...
	I1007 12:08:30.305792  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt: {Name:mk56fe9616efe3c3bc3e1ceda5b49e5b20b43e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.305969  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key ...
	I1007 12:08:30.305980  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key: {Name:mk47f918245deed16906815c0d30c35fb7007064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.306077  754935 certs.go:256] generating profile certs ...
	I1007 12:08:30.306148  754935 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key
	I1007 12:08:30.306163  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt with IP's: []
	I1007 12:08:30.633236  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt ...
	I1007 12:08:30.633273  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: {Name:mk2af063631c68299ee0f188c8248df6f07e8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.633453  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key ...
	I1007 12:08:30.633464  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key: {Name:mkfe775678202cd58fcf06ea7b26ad5560d3a483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.633532  754935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073
	I1007 12:08:30.633551  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I1007 12:08:30.889463  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 ...
	I1007 12:08:30.889499  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073: {Name:mk9ac46c8c2cfd9cc90be39a3d6acc574fb18e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.889675  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073 ...
	I1007 12:08:30.889688  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073: {Name:mkdc25b5cb3b50e13806fd559153de2005948061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.889761  754935 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt
	I1007 12:08:30.889860  754935 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key
	I1007 12:08:30.889912  754935 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key
	I1007 12:08:30.889931  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt with IP's: []
	I1007 12:08:30.983559  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt ...
	I1007 12:08:30.983594  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt: {Name:mkad2e0848c9219bce5e94cbee1000568da3bb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.983782  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key ...
	I1007 12:08:30.983796  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key: {Name:mk7e53181e8b50324583479ecc40043bfdc3782e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.983965  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:30.984002  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:30.984024  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:30.984048  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:08:30.984838  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:31.018116  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:08:31.045859  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:31.074096  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:08:31.101386  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 12:08:31.127326  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:08:31.152967  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:31.183365  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:08:31.211557  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:31.238242  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:08:31.257072  754935 ssh_runner.go:195] Run: openssl version
	I1007 12:08:31.263620  754935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:31.275430  754935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.280847  754935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.280927  754935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.287409  754935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
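
The b5213941.0 name is the OpenSSL subject hash of minikubeCA.pem, which is why the step above first asks openssl for the hash and then creates the symlink under /etc/ssl/certs. A local sketch of the same hash-and-link idea (the cert path is a placeholder and the link is created in the working directory rather than /etc/ssl/certs):

    // Sketch: derive the OpenSSL subject-hash name for a CA and create the
    // hash-named symlink, mirroring the two commands above. Paths are placeholders.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const certPath = "minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "openssl:", err)
            return
        }
        link := strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        if err := os.Symlink(certPath, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink:", err)
            return
        }
        fmt.Println("created", link, "->", certPath)
    }
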
	I1007 12:08:31.299175  754935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:31.303993  754935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:31.304061  754935 kubeadm.go:392] StartCluster: {Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:31.304158  754935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:08:31.304227  754935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:08:31.344425  754935 cri.go:89] found id: ""
	I1007 12:08:31.344513  754935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:08:31.355234  754935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:08:31.367359  754935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:08:31.377623  754935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:08:31.377649  754935 kubeadm.go:157] found existing configuration files:
	
	I1007 12:08:31.377706  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:08:31.387122  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:08:31.387188  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:08:31.397541  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:08:31.410453  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:08:31.410607  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:08:31.421318  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:08:31.431331  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:08:31.431395  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:08:31.441975  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:08:31.452515  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:08:31.452580  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:08:31.462994  754935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:08:31.521057  754935 kubeadm.go:310] W1007 12:08:31.504124     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:08:31.521747  754935 kubeadm.go:310] W1007 12:08:31.505103     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:08:31.648302  754935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:08:42.239304  754935 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:08:42.239394  754935 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:08:42.239516  754935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:08:42.239664  754935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:08:42.239780  754935 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:08:42.239840  754935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:08:42.241259  754935 out.go:235]   - Generating certificates and keys ...
	I1007 12:08:42.241353  754935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:08:42.241425  754935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:08:42.241497  754935 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:08:42.241550  754935 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:08:42.241601  754935 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:08:42.241653  754935 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:08:42.241699  754935 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:08:42.241819  754935 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-054971 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1007 12:08:42.241914  754935 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:08:42.242108  754935 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-054971 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1007 12:08:42.242192  754935 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:08:42.242275  754935 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:08:42.242337  754935 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:08:42.242427  754935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:08:42.242499  754935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:08:42.242553  754935 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:08:42.242616  754935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:08:42.242689  754935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:08:42.242763  754935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:08:42.242852  754935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:08:42.242949  754935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:08:42.244275  754935 out.go:235]   - Booting up control plane ...
	I1007 12:08:42.244365  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:08:42.244436  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:08:42.244531  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:08:42.244703  754935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:08:42.244792  754935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:08:42.244826  754935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:08:42.244940  754935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:08:42.245051  754935 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:08:42.245142  754935 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001601827s
	I1007 12:08:42.245240  754935 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:08:42.245316  754935 kubeadm.go:310] [api-check] The API server is healthy after 5.503726284s
	I1007 12:08:42.245443  754935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:08:42.245592  754935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:08:42.245666  754935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:08:42.245971  754935 kubeadm.go:310] [mark-control-plane] Marking the node addons-054971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:08:42.246075  754935 kubeadm.go:310] [bootstrap-token] Using token: hpfhac.k0ed3mhw422jku3i
	I1007 12:08:42.247396  754935 out.go:235]   - Configuring RBAC rules ...
	I1007 12:08:42.247499  754935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:08:42.247572  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:08:42.247711  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:08:42.247816  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:08:42.247916  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:08:42.248014  754935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:08:42.248150  754935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:08:42.248229  754935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:08:42.248297  754935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:08:42.248306  754935 kubeadm.go:310] 
	I1007 12:08:42.248386  754935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:08:42.248396  754935 kubeadm.go:310] 
	I1007 12:08:42.248494  754935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:08:42.248503  754935 kubeadm.go:310] 
	I1007 12:08:42.248524  754935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:08:42.248579  754935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:08:42.248622  754935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:08:42.248631  754935 kubeadm.go:310] 
	I1007 12:08:42.248701  754935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:08:42.248711  754935 kubeadm.go:310] 
	I1007 12:08:42.248750  754935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:08:42.248754  754935 kubeadm.go:310] 
	I1007 12:08:42.248844  754935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:08:42.248919  754935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:08:42.249011  754935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:08:42.249030  754935 kubeadm.go:310] 
	I1007 12:08:42.249168  754935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:08:42.249267  754935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:08:42.249277  754935 kubeadm.go:310] 
	I1007 12:08:42.249486  754935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hpfhac.k0ed3mhw422jku3i \
	I1007 12:08:42.249624  754935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:08:42.249655  754935 kubeadm.go:310] 	--control-plane 
	I1007 12:08:42.249662  754935 kubeadm.go:310] 
	I1007 12:08:42.249741  754935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:08:42.249758  754935 kubeadm.go:310] 
	I1007 12:08:42.249867  754935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hpfhac.k0ed3mhw422jku3i \
	I1007 12:08:42.250007  754935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:08:42.250069  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:08:42.250131  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:08:42.251788  754935 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 12:08:42.253184  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 12:08:42.268059  754935 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 12:08:42.289284  754935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:08:42.289372  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:42.289435  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054971 minikube.k8s.io/updated_at=2024_10_07T12_08_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=addons-054971 minikube.k8s.io/primary=true
	I1007 12:08:42.442331  754935 ops.go:34] apiserver oom_adj: -16
	I1007 12:08:42.442529  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:42.942986  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:43.442778  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:43.942663  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:44.442794  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:44.942804  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:45.443049  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:45.541560  754935 kubeadm.go:1113] duration metric: took 3.252260027s to wait for elevateKubeSystemPrivileges
	I1007 12:08:45.541595  754935 kubeadm.go:394] duration metric: took 14.23754191s to StartCluster
	I1007 12:08:45.541616  754935 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:45.541851  754935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:08:45.542283  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:45.542492  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:08:45.542518  754935 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:45.542574  754935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 12:08:45.542685  754935 addons.go:69] Setting yakd=true in profile "addons-054971"
	I1007 12:08:45.542703  754935 addons.go:234] Setting addon yakd=true in "addons-054971"
	I1007 12:08:45.542713  754935 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-054971"
	I1007 12:08:45.542715  754935 addons.go:69] Setting gcp-auth=true in profile "addons-054971"
	I1007 12:08:45.542742  754935 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-054971"
	I1007 12:08:45.542747  754935 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-054971"
	I1007 12:08:45.542761  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542757  754935 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-054971"
	I1007 12:08:45.542767  754935 addons.go:69] Setting default-storageclass=true in profile "addons-054971"
	I1007 12:08:45.542778  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542784  754935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-054971"
	I1007 12:08:45.542786  754935 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054971"
	I1007 12:08:45.542808  754935 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-054971"
	I1007 12:08:45.542839  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542857  754935 addons.go:69] Setting volcano=true in profile "addons-054971"
	I1007 12:08:45.542756  754935 mustload.go:65] Loading cluster: addons-054971
	I1007 12:08:45.542874  754935 addons.go:234] Setting addon volcano=true in "addons-054971"
	I1007 12:08:45.542901  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543034  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:45.543125  754935 addons.go:69] Setting ingress-dns=true in profile "addons-054971"
	I1007 12:08:45.543146  754935 addons.go:234] Setting addon ingress-dns=true in "addons-054971"
	I1007 12:08:45.543182  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543244  754935 addons.go:69] Setting inspektor-gadget=true in profile "addons-054971"
	I1007 12:08:45.543257  754935 addons.go:234] Setting addon inspektor-gadget=true in "addons-054971"
	I1007 12:08:45.543266  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543271  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543279  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543281  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543302  754935 addons.go:69] Setting volumesnapshots=true in profile "addons-054971"
	I1007 12:08:45.543343  754935 addons.go:69] Setting ingress=true in profile "addons-054971"
	I1007 12:08:45.543371  754935 addons.go:69] Setting storage-provisioner=true in profile "addons-054971"
	I1007 12:08:45.543410  754935 addons.go:234] Setting addon storage-provisioner=true in "addons-054971"
	I1007 12:08:45.543446  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543494  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543305  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543538  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543538  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543539  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543576  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543599  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543627  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543654  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543374  754935 addons.go:234] Setting addon ingress=true in "addons-054971"
	I1007 12:08:45.543347  754935 addons.go:234] Setting addon volumesnapshots=true in "addons-054971"
	I1007 12:08:45.543757  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543773  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543872  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543905  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544073  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543308  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543318  754935 addons.go:69] Setting cloud-spanner=true in profile "addons-054971"
	I1007 12:08:45.544103  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544114  754935 addons.go:234] Setting addon cloud-spanner=true in "addons-054971"
	I1007 12:08:45.543317  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543266  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544169  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543329  754935 addons.go:69] Setting metrics-server=true in profile "addons-054971"
	I1007 12:08:45.544184  754935 addons.go:234] Setting addon metrics-server=true in "addons-054971"
	I1007 12:08:45.543345  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544219  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543384  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:45.543331  754935 addons.go:69] Setting registry=true in profile "addons-054971"
	I1007 12:08:45.544261  754935 addons.go:234] Setting addon registry=true in "addons-054971"
	I1007 12:08:45.544353  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544380  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544444  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.544638  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.544787  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.545180  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.545211  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.547032  754935 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:45.548525  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:45.564301  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I1007 12:08:45.564554  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I1007 12:08:45.564711  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.565207  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.565322  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I1007 12:08:45.565817  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.566054  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.566076  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.566305  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.566324  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.566584  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I1007 12:08:45.566754  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I1007 12:08:45.566921  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.567030  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.567171  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.567668  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.567708  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.567893  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.567915  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.567973  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.567993  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.567993  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I1007 12:08:45.568095  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.568224  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.568510  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.568457  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.568750  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.574551  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574662  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.574712  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574771  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.574852  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574872  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.575124  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.575531  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I1007 12:08:45.574547  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.575696  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.576176  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.576218  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.580153  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.580536  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.580587  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.581222  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.581243  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.581761  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.581827  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.582385  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.582429  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.582638  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.582659  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.583039  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.583204  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.586262  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.586675  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.586725  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.599453  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I1007 12:08:45.599454  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I1007 12:08:45.600556  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.600673  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.601604  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.601624  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.601751  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.601764  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.602628  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.602681  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.603354  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.603405  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.604007  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.604058  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.607737  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I1007 12:08:45.608158  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.608675  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.608700  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.609049  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.609621  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.609676  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.611546  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I1007 12:08:45.612193  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.612323  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1007 12:08:45.612426  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I1007 12:08:45.613152  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.613263  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.613336  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I1007 12:08:45.613892  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.614117  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.614130  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.614273  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.614288  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.614356  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:08:45.614763  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.614905  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.615343  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.615363  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.615482  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.615536  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.615771  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.615832  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.616043  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.616062  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.616183  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.616195  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.616605  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.616637  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.617229  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.617434  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.618320  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.618590  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.618645  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.620600  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I1007 12:08:45.621190  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.621824  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.622105  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:45.622129  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:45.623250  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:45.623301  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:45.623310  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:45.623319  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:45.623326  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:45.624997  754935 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 12:08:45.626528  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.626858  754935 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:08:45.626879  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 12:08:45.626902  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.629046  754935 addons.go:234] Setting addon default-storageclass=true in "addons-054971"
	I1007 12:08:45.629105  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.629490  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.629533  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.631557  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.631642  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.631674  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.631697  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.631828  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.632047  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.632213  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.633477  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34525
	I1007 12:08:45.634071  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1007 12:08:45.634788  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.635427  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.635478  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.635869  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.636479  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.636523  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.636744  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:45.636776  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:45.636799  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.636804  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 12:08:45.636930  754935 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 12:08:45.637576  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.637594  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.638105  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.638340  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.640225  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.642183  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:45.642841  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.642865  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.643548  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.643834  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.644454  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 12:08:45.644919  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1007 12:08:45.645642  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.646202  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.646225  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.646294  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.646464  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:45.647061  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.647192  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I1007 12:08:45.647278  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.647654  754935 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:08:45.647679  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 12:08:45.647700  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.647706  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.647906  754935 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 12:08:45.648905  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.648934  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.649080  754935 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 12:08:45.649097  754935 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 12:08:45.649118  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.649744  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.650248  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.650572  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.651096  754935 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 12:08:45.651435  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.652132  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.652173  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.652227  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 12:08:45.652244  754935 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 12:08:45.652275  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.652353  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.652515  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.652641  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.652768  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.654410  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.654692  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.654712  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.654864  754935 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-054971"
	I1007 12:08:45.654916  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.654985  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.655145  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.655294  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.655302  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.655347  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.655410  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.656131  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I1007 12:08:45.656173  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38459
	I1007 12:08:45.656698  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.657178  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.658969  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1007 12:08:45.658980  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.659095  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.659115  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.659135  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.659145  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.659157  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.659640  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.659646  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.659695  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.659857  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.660515  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.660684  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I1007 12:08:45.660686  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.661597  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.662445  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.662465  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.662539  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.662809  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.662957  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.663988  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.664010  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.664400  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.664417  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.664788  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.664841  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.664865  754935 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 12:08:45.665091  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.665290  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.665697  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.665732  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.666914  754935 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 12:08:45.666936  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 12:08:45.666974  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.666961  754935 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 12:08:45.668139  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.668865  754935 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:08:45.668884  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 12:08:45.669132  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.670327  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 12:08:45.670638  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.671202  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.671224  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.671265  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.671632  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.671900  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.672083  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I1007 12:08:45.672095  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.672709  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 12:08:45.673580  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.674336  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.674358  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.674751  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.674817  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I1007 12:08:45.674970  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.676100  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.676783  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 12:08:45.676987  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.678090  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.678169  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.678184  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.678624  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.678795  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.678972  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.679103  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.679240  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.679261  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.679693  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.679779  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 12:08:45.679837  754935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:08:45.680012  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.680715  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I1007 12:08:45.681266  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.681628  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.681694  754935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:08:45.681711  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:08:45.681730  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.681743  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.681760  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.682185  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.682680  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.682750  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I1007 12:08:45.683265  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I1007 12:08:45.683505  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 12:08:45.683507  754935 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 12:08:45.683885  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.684189  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.684668  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.684694  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.684876  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:08:45.684893  754935 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:08:45.684952  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.685177  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.685193  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.685258  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.685321  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I1007 12:08:45.685468  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.685611  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.685753  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.685764  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.686329  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 12:08:45.686348  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.686508  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.687340  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.686672  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.687180  754935 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 12:08:45.687975  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.688135  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.688394  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.688669  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.688785  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.688836  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.689005  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.689117  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.689157  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.689173  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.689272  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.689326  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.689455  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.689410  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.689536  754935 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 12:08:45.689563  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 12:08:45.689598  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.689639  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.690230  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.691679  754935 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 12:08:45.691700  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 12:08:45.691720  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.691787  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 12:08:45.692869  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 12:08:45.692934  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 12:08:45.692949  754935 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 12:08:45.692971  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.694266  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 12:08:45.694294  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 12:08:45.694318  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.695427  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.696571  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.696605  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.696901  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.697184  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.697386  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.698141  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.698186  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.698783  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.698807  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.698908  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.699014  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.699069  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.699127  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.699753  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.700301  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.700314  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.700668  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.700790  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.700916  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.701020  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	W1007 12:08:45.701859  754935 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58134->192.168.39.62:22: read: connection reset by peer
	I1007 12:08:45.701887  754935 retry.go:31] will retry after 353.537159ms: ssh: handshake failed: read tcp 192.168.39.1:58134->192.168.39.62:22: read: connection reset by peer
	I1007 12:08:45.703218  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I1007 12:08:45.703730  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.704270  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.704298  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.704688  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.704898  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.706805  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.707157  754935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:08:45.707179  754935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:08:45.707201  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.710543  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.711036  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.711070  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.711246  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.711430  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.711593  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.711759  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1007 12:08:45.711762  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.712272  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.712778  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.712800  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.713128  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.713313  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.715110  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.717148  754935 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 12:08:45.718693  754935 out.go:177]   - Using image docker.io/busybox:stable
	I1007 12:08:45.720327  754935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:08:45.720350  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 12:08:45.720376  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.723629  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.724138  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.724168  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.724281  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.724521  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.724648  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.724755  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.982189  754935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:45.982242  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:08:46.008808  754935 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 12:08:46.008836  754935 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 12:08:46.041021  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 12:08:46.041056  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 12:08:46.106083  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:08:46.156011  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:08:46.156044  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 12:08:46.160477  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:08:46.183422  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:08:46.203972  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:08:46.206238  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 12:08:46.216447  754935 node_ready.go:35] waiting up to 6m0s for node "addons-054971" to be "Ready" ...
	I1007 12:08:46.220662  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:08:46.228496  754935 node_ready.go:49] node "addons-054971" has status "Ready":"True"
	I1007 12:08:46.228535  754935 node_ready.go:38] duration metric: took 12.032192ms for node "addons-054971" to be "Ready" ...
	I1007 12:08:46.228550  754935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:46.229043  754935 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 12:08:46.229067  754935 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 12:08:46.259577  754935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.267553  754935 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:08:46.267579  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 12:08:46.332317  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 12:08:46.332345  754935 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 12:08:46.359856  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:08:46.359885  754935 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:08:46.390671  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:08:46.447060  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 12:08:46.447093  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 12:08:46.469999  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:08:46.625565  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 12:08:46.625602  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 12:08:46.636197  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 12:08:46.636228  754935 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 12:08:46.706690  754935 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 12:08:46.706717  754935 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 12:08:46.794862  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:08:46.794891  754935 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:08:46.801338  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 12:08:46.801372  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 12:08:46.966259  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 12:08:46.966286  754935 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 12:08:46.986946  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 12:08:46.986990  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 12:08:47.120617  754935 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 12:08:47.120645  754935 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 12:08:47.144043  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 12:08:47.144073  754935 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 12:08:47.181400  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:08:47.206188  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:08:47.206217  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 12:08:47.284880  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 12:08:47.284907  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 12:08:47.503891  754935 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 12:08:47.503926  754935 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 12:08:47.549962  754935 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:47.549999  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 12:08:47.560887  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:08:47.726195  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 12:08:47.726225  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 12:08:47.877361  754935 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 12:08:47.877421  754935 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 12:08:47.885192  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:48.026002  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 12:08:48.026070  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 12:08:48.144376  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 12:08:48.144414  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 12:08:48.194374  754935 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 12:08:48.194405  754935 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 12:08:48.274233  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:48.485862  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 12:08:48.485978  754935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 12:08:48.515469  754935 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.533183499s)
	I1007 12:08:48.515512  754935 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
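The 2.5 s command that completes at 12:08:48.515469 is the CoreDNS patch launched at 12:08:45.982242: it reads the coredns ConfigMap, inserts a hosts stanza ahead of the forward directive (and a log directive ahead of errors), then replaces the ConfigMap, so that host.minikube.internal resolves to the host-side gateway 192.168.39.1 from inside the cluster. Assuming the stock minikube Corefile, the patched fragment would look roughly like the sketch below (illustrative only; surrounding plugins elided):

    .:53 {
        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }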
	I1007 12:08:48.554460  754935 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 12:08:48.554494  754935 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 12:08:48.815502  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 12:08:48.815538  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 12:08:48.941914  754935 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:08:48.941951  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 12:08:49.029669  754935 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054971" context rescaled to 1 replicas
	I1007 12:08:49.316099  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:08:49.329283  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 12:08:49.329315  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 12:08:49.578162  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:08:49.578196  754935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 12:08:49.935675  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:08:50.291932  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:52.320613  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:52.697432  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 12:08:52.697482  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:52.700919  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:52.701409  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:52.701443  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:52.701676  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:52.701949  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:52.702192  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:52.702387  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:52.774863  754935 pod_ready.go:93] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:52.774889  754935 pod_ready.go:82] duration metric: took 6.515270309s for pod "etcd-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:52.774903  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:53.450019  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 12:08:53.721169  754935 addons.go:234] Setting addon gcp-auth=true in "addons-054971"
	I1007 12:08:53.721243  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:53.721580  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:53.721638  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:53.738245  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I1007 12:08:53.738924  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:53.739520  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:53.739549  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:53.739937  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:53.740581  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:53.740638  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:53.757383  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I1007 12:08:53.757915  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:53.758429  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:53.758453  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:53.758859  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:53.759066  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:53.760830  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:53.761093  754935 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 12:08:53.761131  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:53.763845  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:53.764290  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:53.764325  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:53.764492  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:53.764667  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:53.764823  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:53.765009  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:54.850890  754935 pod_ready.go:103] pod "kube-apiserver-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:55.453313  754935 pod_ready.go:93] pod "kube-apiserver-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.453345  754935 pod_ready.go:82] duration metric: took 2.678432172s for pod "kube-apiserver-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.453361  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.496750  754935 pod_ready.go:93] pod "kube-controller-manager-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.496777  754935 pod_ready.go:82] duration metric: took 43.407725ms for pod "kube-controller-manager-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.496788  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.529741  754935 pod_ready.go:93] pod "kube-scheduler-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.529769  754935 pod_ready.go:82] duration metric: took 32.973081ms for pod "kube-scheduler-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.529779  754935 pod_ready.go:39] duration metric: took 9.301214659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:55.529808  754935 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:55.529865  754935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:55.836162  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.730027198s)
	I1007 12:08:55.836230  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.675714214s)
	I1007 12:08:55.836275  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836290  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836294  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.652839471s)
	I1007 12:08:55.836318  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836334  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836371  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.63236799s)
	I1007 12:08:55.836432  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.615742381s)
	I1007 12:08:55.836456  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836472  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836492  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.445788034s)
	I1007 12:08:55.836456  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836667  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836855  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.655416126s)
	I1007 12:08:55.836236  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836886  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836913  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836923  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836930  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.276000758s)
	I1007 12:08:55.836409  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.630146814s)
	I1007 12:08:55.836948  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836961  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836963  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836969  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836605  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.836626  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837004  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837012  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837018  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836642  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837052  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836741  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837082  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837089  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837094  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836755  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.836755  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.36668111s)
	I1007 12:08:55.837248  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837257  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836764  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837302  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837309  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837315  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.837318  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.952087176s)
	I1007 12:08:55.836769  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	W1007 12:08:55.837351  754935 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 12:08:55.837394  754935 retry.go:31] will retry after 216.590122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
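The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet registered the new kinds when the mapping for VolumeSnapshotClass is looked up, hence "ensure CRDs are installed first". minikube simply retries (the re-run at 12:08:56.054489 below repeats the batch with --force). A minimal two-phase alternative, sketched here purely for illustration using the same addon paths that appear in this log, would establish the CRDs before applying anything that instantiates them:

    # 1. create the snapshot CRDs on their own
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2. wait until the API server reports the new kinds as Established
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # 3. the VolumeSnapshotClass and controller manifests now map cleanly
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml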
	I1007 12:08:55.837491  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.52134421s)
	I1007 12:08:55.837524  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837535  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839150  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839255  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839277  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839294  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839310  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839427  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839450  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839504  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839525  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839632  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839661  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839687  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839701  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839707  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839738  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839769  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839790  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839807  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839829  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839836  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839847  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839696  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839738  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839809  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839830  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840125  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840135  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.840143  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839773  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840192  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840493  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840515  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840540  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840546  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840638  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840662  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840670  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840679  754935 addons.go:475] Verifying addon metrics-server=true in "addons-054971"
	I1007 12:08:55.841255  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.841284  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.841290  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.841300  754935 addons.go:475] Verifying addon registry=true in "addons-054971"
	I1007 12:08:55.842366  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842397  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842403  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842412  754935 addons.go:475] Verifying addon ingress=true in "addons-054971"
	I1007 12:08:55.842540  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842551  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842559  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.842565  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.842612  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842628  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842638  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842645  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842649  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842652  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.842655  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842659  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.842916  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842940  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842947  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839715  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843094  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.843113  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839723  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843269  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843353  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843366  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843375  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843410  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843416  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843739  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843767  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843774  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.845013  754935 out.go:177] * Verifying registry addon...
	I1007 12:08:55.845126  754935 out.go:177] * Verifying ingress addon...
	I1007 12:08:55.846770  754935 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054971 service yakd-dashboard -n yakd-dashboard
	
	I1007 12:08:55.847754  754935 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 12:08:55.847885  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 12:08:55.880361  754935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 12:08:55.880388  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:55.880638  754935 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 12:08:55.880664  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
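For readers unfamiliar with these kapi.go lines: the repeated "waiting for pod ... current state: Pending" entries come from a poll loop that lists pods by label selector and retries until every match reports Running. Below is a minimal illustrative sketch of such a loop using client-go; it is not minikube's actual implementation, and the helper name, poll interval and timeout are assumptions.

```go
// Illustrative sketch only (not minikube source): poll pods by label selector
// until all of them report Running, similar in spirit to the kapi.go wait loop.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for pods with selector %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector and namespace taken from the log above.
	if err := waitForPodsRunning(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```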
	I1007 12:08:55.898882  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.898915  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.899233  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.899253  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.904127  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.904150  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.904452  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.904463  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.904479  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 12:08:55.904579  754935 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I1007 12:08:56.054489  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:56.354370  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:56.355745  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:56.756460  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.820728204s)
	I1007 12:08:56.756534  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:56.756538  754935 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.995412518s)
	I1007 12:08:56.756592  754935 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.226707172s)
	I1007 12:08:56.756623  754935 api_server.go:72] duration metric: took 11.214072275s to wait for apiserver process to appear ...
	I1007 12:08:56.756636  754935 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:56.756552  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:56.756664  754935 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I1007 12:08:56.756932  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:56.756948  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:56.756958  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:56.756964  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:56.757192  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:56.757205  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:56.757218  754935 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-054971"
	I1007 12:08:56.759144  754935 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 12:08:56.759144  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:56.760816  754935 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 12:08:56.761441  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 12:08:56.762459  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 12:08:56.762485  754935 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 12:08:56.825069  754935 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I1007 12:08:56.826829  754935 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 12:08:56.826852  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:56.827978  754935 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:56.828001  754935 api_server.go:131] duration metric: took 71.356494ms to wait for apiserver health ...
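The api_server.go lines above probe the control plane's /healthz endpoint until it answers 200 "ok". A minimal sketch of such a probe follows; the URL is the one from the log, but the TLS handling is an assumption (the real client is built from the cluster's certificates rather than skipping verification).

```go
// Sketch of an apiserver /healthz probe; URL taken from the log above. For
// brevity this skips TLS verification, which the real minikube client does not do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.62:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}
```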
	I1007 12:08:56.828013  754935 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:56.856006  754935 system_pods.go:59] 18 kube-system pods found
	I1007 12:08:56.856047  754935 system_pods.go:61] "coredns-7c65d6cfc9-4hjxz" [0c0e4892-3fa9-48d3-817a-849a323b94c1] Running
	I1007 12:08:56.856054  754935 system_pods.go:61] "coredns-7c65d6cfc9-crd5w" [a29dac23-0aea-4b3e-9a36-6a4631124b86] Running
	I1007 12:08:56.856064  754935 system_pods.go:61] "csi-hostpath-attacher-0" [8cf94124-02f6-4ca0-a0ed-a0451f57672f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:08:56.856072  754935 system_pods.go:61] "csi-hostpath-resizer-0" [85aef89b-cbdf-4f43-9f6b-c28b0ddb19c5] Pending
	I1007 12:08:56.856084  754935 system_pods.go:61] "csi-hostpathplugin-drczb" [dd5db9a2-ce24-463e-abd7-3d0e4ff66cb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:08:56.856091  754935 system_pods.go:61] "etcd-addons-054971" [faabeedb-9d17-4edf-8213-28f0cfc6c6e4] Running
	I1007 12:08:56.856099  754935 system_pods.go:61] "kube-apiserver-addons-054971" [1c00ede0-c30d-42bd-9575-9c06801f6d8a] Running
	I1007 12:08:56.856104  754935 system_pods.go:61] "kube-controller-manager-addons-054971" [44c0feb8-0a14-41c2-8b98-9f6ddb7d979f] Running
	I1007 12:08:56.856113  754935 system_pods.go:61] "kube-ingress-dns-minikube" [06754245-57c9-4323-bfce-bbbe4c9f27ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:08:56.856121  754935 system_pods.go:61] "kube-proxy-h7ccq" [80f6db92-9b23-4fb4-8fac-2a32f9da0874] Running
	I1007 12:08:56.856129  754935 system_pods.go:61] "kube-scheduler-addons-054971" [c3f1df88-f63c-47cd-a7df-594a861f6101] Running
	I1007 12:08:56.856139  754935 system_pods.go:61] "metrics-server-84c5f94fbc-hglsg" [bc1b53d0-93d0-4734-bfbe-9b7172391a6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:08:56.856154  754935 system_pods.go:61] "nvidia-device-plugin-daemonset-285h8" [cf2c616e-a6ca-4d0d-8e9b-c62ea66a2246] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:08:56.856165  754935 system_pods.go:61] "registry-66c9cd494c-77gfb" [256d2114-d21b-4d85-a9d9-a1f7e3e0a43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:08:56.856174  754935 system_pods.go:61] "registry-proxy-vjrwk" [bdc2b33d-c287-48c5-a525-9c0e3933f162] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:08:56.856187  754935 system_pods.go:61] "snapshot-controller-56fcc65765-2rx2g" [b676e4bc-336d-421a-b68e-c54457192fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.856196  754935 system_pods.go:61] "snapshot-controller-56fcc65765-7khhx" [b27bbafd-fba3-4526-b91f-ccfdcf2cf397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.856202  754935 system_pods.go:61] "storage-provisioner" [48ad7da8-0680-4936-ac5a-a4de591e0b9c] Running
	I1007 12:08:56.856211  754935 system_pods.go:74] duration metric: took 28.19103ms to wait for pod list to return data ...
	I1007 12:08:56.856224  754935 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:56.875257  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:56.877165  754935 default_sa.go:45] found service account: "default"
	I1007 12:08:56.877194  754935 default_sa.go:55] duration metric: took 20.961727ms for default service account to be created ...
	I1007 12:08:56.877206  754935 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:56.877465  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:56.889700  754935 system_pods.go:86] 18 kube-system pods found
	I1007 12:08:56.889739  754935 system_pods.go:89] "coredns-7c65d6cfc9-4hjxz" [0c0e4892-3fa9-48d3-817a-849a323b94c1] Running
	I1007 12:08:56.889745  754935 system_pods.go:89] "coredns-7c65d6cfc9-crd5w" [a29dac23-0aea-4b3e-9a36-6a4631124b86] Running
	I1007 12:08:56.889752  754935 system_pods.go:89] "csi-hostpath-attacher-0" [8cf94124-02f6-4ca0-a0ed-a0451f57672f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:08:56.889760  754935 system_pods.go:89] "csi-hostpath-resizer-0" [85aef89b-cbdf-4f43-9f6b-c28b0ddb19c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 12:08:56.889770  754935 system_pods.go:89] "csi-hostpathplugin-drczb" [dd5db9a2-ce24-463e-abd7-3d0e4ff66cb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:08:56.889775  754935 system_pods.go:89] "etcd-addons-054971" [faabeedb-9d17-4edf-8213-28f0cfc6c6e4] Running
	I1007 12:08:56.889779  754935 system_pods.go:89] "kube-apiserver-addons-054971" [1c00ede0-c30d-42bd-9575-9c06801f6d8a] Running
	I1007 12:08:56.889783  754935 system_pods.go:89] "kube-controller-manager-addons-054971" [44c0feb8-0a14-41c2-8b98-9f6ddb7d979f] Running
	I1007 12:08:56.889788  754935 system_pods.go:89] "kube-ingress-dns-minikube" [06754245-57c9-4323-bfce-bbbe4c9f27ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:08:56.889792  754935 system_pods.go:89] "kube-proxy-h7ccq" [80f6db92-9b23-4fb4-8fac-2a32f9da0874] Running
	I1007 12:08:56.889795  754935 system_pods.go:89] "kube-scheduler-addons-054971" [c3f1df88-f63c-47cd-a7df-594a861f6101] Running
	I1007 12:08:56.889801  754935 system_pods.go:89] "metrics-server-84c5f94fbc-hglsg" [bc1b53d0-93d0-4734-bfbe-9b7172391a6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:08:56.889809  754935 system_pods.go:89] "nvidia-device-plugin-daemonset-285h8" [cf2c616e-a6ca-4d0d-8e9b-c62ea66a2246] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:08:56.889815  754935 system_pods.go:89] "registry-66c9cd494c-77gfb" [256d2114-d21b-4d85-a9d9-a1f7e3e0a43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:08:56.889820  754935 system_pods.go:89] "registry-proxy-vjrwk" [bdc2b33d-c287-48c5-a525-9c0e3933f162] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:08:56.889825  754935 system_pods.go:89] "snapshot-controller-56fcc65765-2rx2g" [b676e4bc-336d-421a-b68e-c54457192fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.889831  754935 system_pods.go:89] "snapshot-controller-56fcc65765-7khhx" [b27bbafd-fba3-4526-b91f-ccfdcf2cf397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.889835  754935 system_pods.go:89] "storage-provisioner" [48ad7da8-0680-4936-ac5a-a4de591e0b9c] Running
	I1007 12:08:56.889844  754935 system_pods.go:126] duration metric: took 12.630727ms to wait for k8s-apps to be running ...
	I1007 12:08:56.889853  754935 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:56.889908  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
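The ssh_runner.go entries show commands being executed on the node over SSH, such as the kubelet liveness check above. A rough sketch of running one such command with golang.org/x/crypto/ssh follows; the address, user and key path are placeholders, and ignoring the host key is for illustration only.

```go
// Sketch of executing a command on the node over SSH, in the spirit of the
// ssh_runner.go lines above. Connection details are placeholder assumptions.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Exit status 0 means kubelet is active; a non-zero status surfaces as an error.
	err := runRemote("192.168.39.62:22", "docker", "/path/to/id_rsa",
		"sudo systemctl is-active --quiet service kubelet")
	fmt.Println("kubelet active:", err == nil)
}
```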
	I1007 12:08:56.964918  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 12:08:56.964955  754935 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 12:08:57.077542  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 12:08:57.077570  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 12:08:57.135548  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
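This step applies the gcp-auth addon manifests with the kubectl binary staged on the node, pointed at /var/lib/minikube/kubeconfig. A simplified local equivalent using os/exec is sketched below; in the real run the command is executed inside the VM via ssh_runner, not on the host, and the paths are the ones shown in the log.

```go
// Sketch only: run the logged "kubectl apply" for the gcp-auth manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
	)
	// The kubeconfig path comes from the log; everything else inherits the environment.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```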
	I1007 12:08:57.266343  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:57.353253  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:57.353330  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:57.769639  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:57.853163  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:57.853627  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.269889  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:58.352912  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:58.353071  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.624474  754935 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.734536155s)
	I1007 12:08:58.624589  754935 system_svc.go:56] duration metric: took 1.734729955s WaitForService to wait for kubelet
	I1007 12:08:58.624607  754935 kubeadm.go:582] duration metric: took 13.082055472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:08:58.624634  754935 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:58.624533  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.569981166s)
	I1007 12:08:58.624696  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.624715  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.625035  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.625055  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.625065  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.625072  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.625115  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.625284  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.625313  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.628190  754935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:58.628220  754935 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:58.628233  754935 node_conditions.go:105] duration metric: took 3.590382ms to run NodePressure ...
	I1007 12:08:58.628248  754935 start.go:241] waiting for startup goroutines ...
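The node_conditions.go lines above read the node's capacity (ephemeral storage, CPU) and confirm no pressure conditions are set before startup continues. A short client-go sketch of that kind of check follows; the function name is illustrative, not minikube's.

```go
// Sketch of reading node capacity and pressure conditions via client-go,
// mirroring the node_conditions.go output above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
				}
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := checkNodePressure(context.Background(), cs); err != nil {
		fmt.Println(err)
	}
}
```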
	I1007 12:08:58.767925  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:58.885422  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:58.885765  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.915303  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.779703404s)
	I1007 12:08:58.915365  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.915383  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.915719  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.915739  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.915748  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.915757  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.915774  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.916032  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.916057  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.917239  754935 addons.go:475] Verifying addon gcp-auth=true in "addons-054971"
	I1007 12:08:58.917581  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.918962  754935 out.go:177] * Verifying gcp-auth addon...
	I1007 12:08:58.920707  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 12:08:58.981034  754935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 12:08:58.981083  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:08:59.270266  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:59.369205  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:59.369975  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:59.424588  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:08:59.767379  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:59.853162  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:59.853322  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:59.924874  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:00.266994  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:00.353125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:00.353641  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:00.425815  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:00.769125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:00.852627  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:00.852852  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:00.924629  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:01.265766  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:01.353507  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:01.353654  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:01.424442  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:01.768449  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:01.853613  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:01.854037  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:01.925089  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:02.266985  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:02.353094  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:02.353767  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:02.423737  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:02.766533  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:02.852776  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:02.853310  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:02.924827  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:03.266180  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:03.352684  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:03.353284  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:03.424810  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:03.767280  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:03.852859  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:03.853181  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:03.926551  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:04.266926  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:04.353042  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:04.353201  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:04.425034  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:04.766382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:04.852544  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:04.853056  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:04.924839  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:05.266262  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:05.352437  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:05.352823  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:05.424280  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:05.766788  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:05.852411  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:05.852959  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:05.925163  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:06.266760  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:06.354501  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:06.355969  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:06.425350  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:06.766752  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:06.853227  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:06.853550  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:06.925891  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:07.266350  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:07.353153  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:07.353767  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:07.425389  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:07.767002  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:07.852878  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:07.853371  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:07.925196  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:08.266684  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:08.352320  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:08.353035  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:08.424523  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:08.766190  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:08.852754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:08.853252  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:08.925127  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:09.267509  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:09.352084  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:09.352405  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:09.424308  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:09.767184  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:09.851383  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:09.851943  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:09.925759  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.266292  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:10.352038  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:10.353248  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:10.428569  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.945862  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:10.947533  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:10.947953  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.949652  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.266808  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.352282  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:11.352966  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:11.424565  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:11.767027  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.852780  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:11.853312  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:11.924743  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:12.267917  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:12.354818  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:12.354822  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:12.460273  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:12.768272  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:12.852646  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:12.852704  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:12.924571  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:13.266363  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:13.352291  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:13.352840  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:13.424497  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:13.767792  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:13.866097  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:13.866392  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:13.924917  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:14.267258  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:14.352471  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:14.352882  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:14.424903  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:14.767195  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:14.852080  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:14.852820  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:14.924872  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:15.266426  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:15.352399  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:15.352593  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:15.424436  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:15.766194  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:15.853744  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:15.854256  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:15.924382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:16.498802  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:16.500298  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:16.500940  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:16.501136  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:16.766823  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:16.853600  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:16.854128  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:16.924537  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:17.267373  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:17.352644  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:17.353055  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:17.424830  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:17.770753  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:17.866012  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:17.866328  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:17.925077  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:18.266797  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:18.352784  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:18.353249  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:18.424539  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:18.766248  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:18.852406  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:18.854819  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:18.925036  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:19.267099  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:19.352192  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:19.352713  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:19.424706  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:19.765953  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:19.854548  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:19.854968  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:19.924552  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:20.272921  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:20.352386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:20.352747  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:20.424593  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:20.766631  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:20.853208  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:20.854193  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:20.924869  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:21.267417  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:21.353199  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:21.353610  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:21.424192  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:21.767187  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:21.853769  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:21.853880  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:21.924527  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:22.270751  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:22.353719  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:22.354353  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:22.423982  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:22.770386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:22.852511  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:22.852766  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:22.924534  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:23.266759  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:23.353265  754935 kapi.go:107] duration metric: took 27.505373211s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 12:09:23.353569  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:23.424786  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:23.765973  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:23.853140  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:23.932052  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:24.267319  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:24.355619  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:24.425688  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:24.766437  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:24.852655  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:24.924453  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:25.267523  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:25.352315  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:25.425133  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:25.767624  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:25.867377  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:25.966696  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:26.276636  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:26.362199  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:26.425377  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:26.767150  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:26.853058  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:26.924640  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:27.265767  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:27.365240  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:27.424878  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:27.766565  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:27.866980  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:27.924387  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:28.268699  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:28.353790  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:28.424246  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:28.766950  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:28.852657  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:28.924976  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:29.266127  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:29.352431  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:29.424300  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:29.768661  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:29.853940  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.103123  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:30.286115  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:30.387535  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.424342  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:30.767717  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:30.851892  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.925548  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:31.266559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:31.352942  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:31.424545  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:31.767683  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:31.853372  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:31.924541  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:32.494365  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:32.494974  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:32.495155  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:32.767575  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:32.852251  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:32.925132  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:33.268381  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:33.353082  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:33.425160  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:33.766867  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:33.851862  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:33.924545  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:34.268091  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:34.351857  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:34.424371  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:34.765859  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:34.851795  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:34.925125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:35.268649  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:35.352398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:35.424882  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:35.767355  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:35.852793  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:35.924543  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:36.266951  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:36.373259  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:36.466696  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:36.766379  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:36.852580  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:36.952382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:37.267559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:37.352116  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:37.424362  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:37.766743  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:37.852633  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:37.924879  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:38.267249  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:38.367943  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:38.426503  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:38.766731  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:38.851596  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:38.932390  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:39.267337  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:39.352631  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:39.424636  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:39.789346  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:39.888521  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:39.924875  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:40.267361  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:40.353099  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:40.424750  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:40.767391  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:40.851753  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:40.925807  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:41.266655  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:41.352388  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:41.425381  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:41.767235  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:41.853131  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:41.925362  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:42.266631  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:42.367801  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:42.424173  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:42.814241  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:42.861318  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:42.931564  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:43.269098  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:43.353398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:43.425203  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:43.789726  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:43.879474  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:43.979494  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:44.267436  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:44.352937  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:44.423892  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:44.778559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:44.852534  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:44.926925  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:45.266966  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:45.352710  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:45.424474  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:45.766470  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:45.852957  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:45.924810  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:46.266386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:46.352077  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:46.424494  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:46.767310  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:46.853293  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:46.924948  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:47.270288  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:47.352557  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:47.424096  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:47.773226  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:47.853477  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:47.925278  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:48.266619  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:48.352509  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:48.424845  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:48.767847  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:48.853371  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:48.924754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:49.266835  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:49.352290  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:49.425393  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:49.771811  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:49.874544  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:49.967536  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:50.266594  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:50.353520  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:50.423929  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:50.767844  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:50.852663  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:50.927497  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:51.267700  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:51.351929  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:51.424934  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:51.766470  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:51.853255  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:51.925107  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:52.267149  754935 kapi.go:107] duration metric: took 55.505705179s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 12:09:52.367849  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:52.424233  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:52.853622  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:52.926552  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:53.353261  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:53.425551  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:53.859753  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:53.926785  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:54.352129  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:54.425419  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:54.853884  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:54.924826  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:55.352372  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:55.424246  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:55.851706  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:55.924167  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:56.353304  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:56.424983  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:56.853036  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:56.925262  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:57.356363  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:57.425307  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:57.852836  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:57.924421  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:58.352935  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:58.425098  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:58.857757  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:58.924406  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:59.353015  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:59.424665  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:59.853618  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:59.924109  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:00.353176  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:00.424700  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:00.852733  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:00.924286  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:01.353118  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:01.425554  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:01.852987  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:01.925156  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:02.355988  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:02.425561  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:02.853731  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:02.924606  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:03.358359  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:03.426677  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:03.853043  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:03.924421  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:04.353398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:04.425286  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:04.853118  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:04.924547  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:05.353115  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:05.424764  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:05.852319  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:05.924857  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:06.352547  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:06.435026  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:06.860538  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:06.923877  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:07.352359  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:07.424971  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:07.852261  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:07.924949  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:08.352685  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:08.430432  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:08.855282  754935 kapi.go:107] duration metric: took 1m13.007522676s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 12:10:08.953754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:09.424811  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:09.924532  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:10.424655  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:10.937460  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:11.425194  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:11.925034  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:12.424800  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:12.925416  754935 kapi.go:107] duration metric: took 1m14.004702482s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 12:10:12.927746  754935 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-054971 cluster.
	I1007 12:10:12.929654  754935 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 12:10:12.931141  754935 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 12:10:12.932790  754935 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 12:10:12.934127  754935 addons.go:510] duration metric: took 1m27.391558313s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 12:10:12.934173  754935 start.go:246] waiting for cluster config update ...
	I1007 12:10:12.934199  754935 start.go:255] writing updated cluster config ...
	I1007 12:10:12.934493  754935 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:12.993626  754935 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:12.995881  754935 out.go:177] * Done! kubectl is now configured to use "addons-054971" cluster and "default" namespace by default
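The gcp-auth messages in the log above note that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As an illustrative sketch only (not part of the test log, and not minikube's own code), the following client-go snippet creates such a pod; the label value "true", the pod name, and the target namespace are assumptions made for the example.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig that minikube wrote (default location assumed).
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-auth-demo", // hypothetical name for this sketch
    			Labels: map[string]string{
    				// Pods carrying this label are skipped by the gcp-auth webhook,
    				// per the minikube output above; the value "true" is an assumption.
    				"gcp-auth-skip-secret": "true",
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "busybox",
    				Image:   "busybox",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}

    	// Create the pod in the "default" namespace (assumed for the example).
    	created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("created pod without mounted GCP credentials:", created.Name)
    }

Under these assumptions, the created pod should not receive the mounted GCP credential secret, while all other pods in the cluster continue to get it as described in the log output above.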
	
	
	==> CRI-O <==
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.464426336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303664464395829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573887,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17b7e8ab-5978-4512-97eb-416bcf716899 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.465229502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2668365-1d12-46c3-aaca-68e3ac286b21 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.465313275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2668365-1d12-46c3-aaca-68e3ac286b21 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.465718002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0870b3d2b0fe601fbbaaacf4f50a47a17d2ed7033784a5bf0d7a353437c42c53,PodSandboxId:12eb5c9039194f674d11c33c381bb55162801b32d1fa0a7297be5dbb9e0e1289,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728303007997241669,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sf2cd,io.kubernetes.pod.namespace:
ingress-nginx,io.kubernetes.pod.uid: 9cc8b4f3-5772-4869-9516-4d9d30f44c83,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a3d6e06e88bbcacfa898579b6f3423c93a7fdb38103043bb0e479279d10cbf79,PodSandboxId:9c73795a20b13fbf00bd4767777f54a26982777a8ab4c6fae764829f64860af7,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976629498931,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6l97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de5d0723-ba8b-46ce-8ba1-7d87388aad67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520e972e15860c2d105c8db1ed7bd5db66078bfef5706fcc22a934176643693b,PodSandboxId:d19a00fe59c43ce3dd582a5667af457ee1fc33e654cfd4338c8d43139ad43976,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e
6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976287254166,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29nsm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77993de5-0558-4cb9-a30b-3d92c37582f3,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec7
5e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d1ed92adaea98bfc26c5899acebf3f0e61ffadb888b7fe810260d6ef9587d9,PodSandboxId:c83546f8eae30db931f025ec39c4331285f21e6335de9c33e1d1c007a70ceb25,Metadata:
&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728302943764329912,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06754245-57c9-4323-bfce-bbbe4c9f27ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9c
e9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1c
d52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2668365-1d12-46c3-aaca-68e3ac286b21 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.504190542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6de6f66c-c83f-4fd3-90e1-f46df104aec0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.504272664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6de6f66c-c83f-4fd3-90e1-f46df104aec0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.507272524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47b5b9f8-dbae-4401-aff4-3e70cf0cf8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.508524018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303664508494553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573887,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47b5b9f8-dbae-4401-aff4-3e70cf0cf8b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.509162659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93fd41ca-18b2-4a61-a5f5-66cc925b3edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.509237222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93fd41ca-18b2-4a61-a5f5-66cc925b3edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.509662641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0870b3d2b0fe601fbbaaacf4f50a47a17d2ed7033784a5bf0d7a353437c42c53,PodSandboxId:12eb5c9039194f674d11c33c381bb55162801b32d1fa0a7297be5dbb9e0e1289,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728303007997241669,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sf2cd,io.kubernetes.pod.namespace:
ingress-nginx,io.kubernetes.pod.uid: 9cc8b4f3-5772-4869-9516-4d9d30f44c83,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a3d6e06e88bbcacfa898579b6f3423c93a7fdb38103043bb0e479279d10cbf79,PodSandboxId:9c73795a20b13fbf00bd4767777f54a26982777a8ab4c6fae764829f64860af7,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976629498931,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6l97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de5d0723-ba8b-46ce-8ba1-7d87388aad67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520e972e15860c2d105c8db1ed7bd5db66078bfef5706fcc22a934176643693b,PodSandboxId:d19a00fe59c43ce3dd582a5667af457ee1fc33e654cfd4338c8d43139ad43976,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e
6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976287254166,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29nsm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77993de5-0558-4cb9-a30b-3d92c37582f3,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec7
5e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d1ed92adaea98bfc26c5899acebf3f0e61ffadb888b7fe810260d6ef9587d9,PodSandboxId:c83546f8eae30db931f025ec39c4331285f21e6335de9c33e1d1c007a70ceb25,Metadata:
&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728302943764329912,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06754245-57c9-4323-bfce-bbbe4c9f27ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9c
e9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1c
d52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93fd41ca-18b2-4a61-a5f5-66cc925b3edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.512310816Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=74db8da0-adb4-4dad-ba65-110e017aef9c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.512714084Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-s89lv,Uid:5740457f-53c5-4243-9e12-c18af2dffe4b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728303663197251363,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:21:02.876681339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&PodSandboxMetadata{Name:nginx,Uid:cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1728303523124806312,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:18:42.813035469Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a718404b3ae40829d27c8d587c727c1b188195e3d403496dab1b4e54de081c9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:25d5204e-dbd2-40d4-8608-1c35f98a64d1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728303013944593268,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25d5204e-dbd2-40d4-8608-1c35f98a64d1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:10:13.624636948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12eb5c9039194f674d
11c33c381bb55162801b32d1fa0a7297be5dbb9e0e1289,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-bc57996ff-sf2cd,Uid:9cc8b4f3-5772-4869-9516-4d9d30f44c83,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302999899079658,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sf2cd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9cc8b4f3-5772-4869-9516-4d9d30f44c83,pod-template-hash: bc57996ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:55.668828882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d19a00fe59c43ce3dd582a5667af457ee1fc33e654cfd4338c8d43139ad43976,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-29nsm,Uid:77993de5-0558-4cb9-a30b-3d92c37582f3,Namespace:ingress-nginx,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1728302936365092475,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 13c4755d-1e98-4a0a-ab4a-7c2b0cbdb8d9,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 13c4755d-1e98-4a0a-ab4a-7c2b0cbdb8d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-29nsm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77993de5-0558-4cb9-a30b-3d92c37582f3,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:55.709202098Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c73795a20b13fbf00bd4767777f54a26982777a8ab4c6fae764829f64860af7,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-b6l97,Uid:de5d0723-ba8b-46ce-8ba1-7d87388aad67,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,C
reatedAt:1728302936295838611,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 41bf8fb6-2a86-4caa-a240-71b6e30ff0bf,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 41bf8fb6-2a86-4caa-a240-71b6e30ff0bf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6l97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de5d0723-ba8b-46ce-8ba1-7d87388aad67,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:55.795880719Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-hglsg,Uid:bc1b53d0-93d0-4734-bfbe-9b7172391a6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302932326150265,Labels
:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:51.707552400Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:48ad7da8-0680-4936-ac5a-a4de591e0b9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302931924461548,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-config
uration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T12:08:51.306686338Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c83546f8eae30db931f025ec39c4331285f21e6335de9c33e1d1c007a70ceb25,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:06754245-57c9-4323-bfce-bbbe4c9f27ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Created
At:1728302929893445201,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06754245-57c9-4323-bfce-bbbe4c9f27ac,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol
\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-10-07T12:08:49.281825394Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-crd5w,Uid:a29dac23-0aea-4b3e-9a36-6a4631124b86,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302926902697686,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:46.552718499Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&PodSandboxMetadata{Name:kube-proxy-h7ccq,Uid:80f6db92-9b23-4fb4-8fac-2a
32f9da0874,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302926768353219,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:08:46.414682629Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-054971,Uid:30db22b1e86da3ab0b0edc6ea43ef0f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302915508133535,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,ti
er: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30db22b1e86da3ab0b0edc6ea43ef0f8,kubernetes.io/config.seen: 2024-10-07T12:08:34.822123527Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&PodSandboxMetadata{Name:etcd-addons-054971,Uid:c1a005392a92bea19217e8a14af82e23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302915497483842,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.62:2379,kubernetes.io/config.hash: c1a005392a92bea19217e8a14af82e23,kubernetes.io/config.seen: 2024-10-07T12:08:34.822124922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:308ea20d64722aa2d
1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-054971,Uid:05b37e72a0b142ff5d421a916f914bb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302915476689110,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.62:8443,kubernetes.io/config.hash: 05b37e72a0b142ff5d421a916f914bb7,kubernetes.io/config.seen: 2024-10-07T12:08:34.822126292Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-054971,Uid:0d1bb8c38ad378b4c94d7421bbfc015b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Create
dAt:1728302915476135532,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0d1bb8c38ad378b4c94d7421bbfc015b,kubernetes.io/config.seen: 2024-10-07T12:08:34.822119847Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=74db8da0-adb4-4dad-ba65-110e017aef9c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.513979203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d967b9a-40f6-4921-b5b4-8fee1c979fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.514033996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d967b9a-40f6-4921-b5b4-8fee1c979fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.514867821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0870b3d2b0fe601fbbaaacf4f50a47a17d2ed7033784a5bf0d7a353437c42c53,PodSandboxId:12eb5c9039194f674d11c33c381bb55162801b32d1fa0a7297be5dbb9e0e1289,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728303007997241669,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-sf2cd,io.kubernetes.pod.namespace:
ingress-nginx,io.kubernetes.pod.uid: 9cc8b4f3-5772-4869-9516-4d9d30f44c83,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a3d6e06e88bbcacfa898579b6f3423c93a7fdb38103043bb0e479279d10cbf79,PodSandboxId:9c73795a20b13fbf00bd4767777f54a26982777a8ab4c6fae764829f64860af7,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976629498931,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6l97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de5d0723-ba8b-46ce-8ba1-7d87388aad67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520e972e15860c2d105c8db1ed7bd5db66078bfef5706fcc22a934176643693b,PodSandboxId:d19a00fe59c43ce3dd582a5667af457ee1fc33e654cfd4338c8d43139ad43976,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e
6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728302976287254166,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-29nsm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77993de5-0558-4cb9-a30b-3d92c37582f3,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec7
5e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d1ed92adaea98bfc26c5899acebf3f0e61ffadb888b7fe810260d6ef9587d9,PodSandboxId:c83546f8eae30db931f025ec39c4331285f21e6335de9c33e1d1c007a70ceb25,Metadata:
&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728302943764329912,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06754245-57c9-4323-bfce-bbbe4c9f27ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9c
e9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1c
d52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d967b9a-40f6-4921-b5b4-8fee1c979fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.516607180Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},},}" file="otel-collector/interceptors.go:62" id=85cc4819-7061-44b6-a166-5c03ae2eff9b name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.516717283Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-s89lv,Uid:5740457f-53c5-4243-9e12-c18af2dffe4b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728303663197251363,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:21:02.876681339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=85cc4819-7061-44b6-a166-5c03ae2eff9b name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.517216980Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d13709a1-0c87-48f4-97f3-3f55769280df name=/runtime.v1.RuntimeService/PodSandboxStatus
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.517394728Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-s89lv,Uid:5740457f-53c5-4243-9e12-c18af2dffe4b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728303663197251363,Network:&PodSandboxNetworkStatus{Ip:10.244.0.31,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:21:02.876681339Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=d13709a1-0c87-48f4-97f3-3f55769280df name=/runtime.v1.RuntimeService/PodSandboxStatus
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.518186051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},},}" file="otel-collector/interceptors.go:62" id=41817489-5022-4225-b6bc-f90323ba6222 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.518394668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41817489-5022-4225-b6bc-f90323ba6222 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.518754682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41817489-5022-4225-b6bc-f90323ba6222 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.519448771Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6595d7f8-89fc-4ef6-b688-ba6acb8c161f name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 07 12:21:04 addons-054971 crio[664]: time="2024-10-07 12:21:04.519583774Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1728303664385461184,StartedAt:1728303664431626506,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kicbase/echo-server:1.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5740457f-53c5-4243-9e12-c18af2dffe4b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5740457f-53c5-4243-9e12-c18af2dffe4b/containers/hello-world-app/c75549e1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5740457f-53c5-4243-9e12-c18af2dffe4b/volumes/kubernetes.io~projected/kube-api-access-2t4kj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/
var/log/pods/default_hello-world-app-55bf9c44b4-s89lv_5740457f-53c5-4243-9e12-c18af2dffe4b/hello-world-app/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6595d7f8-89fc-4ef6-b688-ba6acb8c161f name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	0329e96353d18       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   7e46180aa3fba       hello-world-app-55bf9c44b4-s89lv
	24e8dc292329c       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   47200705a7bbe       nginx
	0870b3d2b0fe6       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago           Running             controller                0                   12eb5c9039194       ingress-nginx-controller-bc57996ff-sf2cd
	a3d6e06e88bbc       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago           Exited              patch                     1                   9c73795a20b13       ingress-nginx-admission-patch-b6l97
	520e972e15860       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago           Exited              create                    0                   d19a00fe59c43       ingress-nginx-admission-create-29nsm
	bf9b4f3989776       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago           Running             metrics-server            0                   f5e2b972cf2e9       metrics-server-84c5f94fbc-hglsg
	d0d1ed92adaea       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago           Running             minikube-ingress-dns      0                   c83546f8eae30       kube-ingress-dns-minikube
	6290bd3b1143e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago           Running             storage-provisioner       0                   cde800b2a8f0d       storage-provisioner
	f8e39e62975d1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago           Running             coredns                   0                   82922f57009b8       coredns-7c65d6cfc9-crd5w
	ae25a8ac9ad8c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago           Running             kube-proxy                0                   8117d0f36c05d       kube-proxy-h7ccq
	51d1c23abfaa0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago           Running             kube-controller-manager   0                   96caf49404707       kube-controller-manager-addons-054971
	2a4c6b918b992       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago           Running             etcd                      0                   a935c7d53a494       etcd-addons-054971
	2c09712050f97       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago           Running             kube-scheduler            0                   387176b55b1d9       kube-scheduler-addons-054971
	e53eb6f322c1b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago           Running             kube-apiserver            0                   308ea20d64722       kube-apiserver-addons-054971
	
	
	==> coredns [f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595] <==
	[INFO] 10.244.0.7:46164 - 12400 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000461821s
	[INFO] 10.244.0.7:46164 - 37431 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000221042s
	[INFO] 10.244.0.7:46164 - 26941 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000307726s
	[INFO] 10.244.0.7:46164 - 38499 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000253019s
	[INFO] 10.244.0.7:46164 - 25469 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000202565s
	[INFO] 10.244.0.7:46164 - 48610 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000533258s
	[INFO] 10.244.0.7:46164 - 59951 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000472317s
	[INFO] 10.244.0.7:46705 - 4811 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000308402s
	[INFO] 10.244.0.7:46705 - 4510 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000390742s
	[INFO] 10.244.0.7:46835 - 11392 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069584s
	[INFO] 10.244.0.7:46835 - 11121 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031808s
	[INFO] 10.244.0.7:40750 - 50107 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065147s
	[INFO] 10.244.0.7:40750 - 49864 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093385s
	[INFO] 10.244.0.7:45733 - 15905 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063565s
	[INFO] 10.244.0.7:45733 - 15730 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044511s
	[INFO] 10.244.0.21:46982 - 16208 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000664657s
	[INFO] 10.244.0.21:49248 - 3404 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000092379s
	[INFO] 10.244.0.21:48914 - 28914 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000215437s
	[INFO] 10.244.0.21:54827 - 44749 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116202s
	[INFO] 10.244.0.21:47777 - 57831 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099191s
	[INFO] 10.244.0.21:41723 - 4687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000227365s
	[INFO] 10.244.0.21:36163 - 31266 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00123394s
	[INFO] 10.244.0.21:34390 - 64424 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00264587s
	[INFO] 10.244.0.24:48617 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00061398s
	[INFO] 10.244.0.24:43363 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000227223s
	
	
	==> describe nodes <==
	Name:               addons-054971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-054971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=addons-054971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054971
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054971
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:19:45 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:19:45 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:19:45 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:19:45 +0000   Mon, 07 Oct 2024 12:08:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    addons-054971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 66985a485f274232a41b8a9bf0356c4d
	  System UUID:                66985a48-5f27-4232-a41b-8a9bf0356c4d
	  Boot ID:                    7facc38f-b76d-4fb6-87a9-bdc599b7c391
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-s89lv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-sf2cd    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-crd5w                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-054971                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-054971                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-054971       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-h7ccq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-054971                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-hglsg             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-054971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-054971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-054971 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-054971 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-054971 event: Registered Node addons-054971 in Controller
	
	
	==> dmesg <==
	[  +5.728444] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.086397] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.335700] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +1.336407] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.079195] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.013636] kauditd_printk_skb: 101 callbacks suppressed
	[Oct 7 12:09] kauditd_printk_skb: 80 callbacks suppressed
	[ +19.341360] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.279239] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.927615] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.717813] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.606051] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.392125] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 7 12:10] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.934403] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.973486] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 12:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.441161] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.040619] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.012103] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.259381] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.075935] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 7 12:19] kauditd_printk_skb: 56 callbacks suppressed
	[ +11.623596] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 12:21] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f] <==
	{"level":"warn","ts":"2024-10-07T12:09:32.479780Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.593042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:09:32.479820Z","caller":"traceutil/trace.go:171","msg":"trace[680849849] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"138.635745ms","start":"2024-10-07T12:09:32.341178Z","end":"2024-10-07T12:09:32.479814Z","steps":["trace[680849849] 'agreement among raft nodes before linearized reading'  (duration: 138.584444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:09:32.480207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:09:32.101875Z","time spent":"377.653315ms","remote":"127.0.0.1:46292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:923 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-07T12:09:32.479495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.838644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-84c5f94fbc-hglsg.17fc2a644bb654d6\" ","response":"range_response_count:1 size:816"}
	{"level":"info","ts":"2024-10-07T12:09:32.480557Z","caller":"traceutil/trace.go:171","msg":"trace[485250044] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-84c5f94fbc-hglsg.17fc2a644bb654d6; range_end:; response_count:1; response_revision:933; }","duration":"159.912705ms","start":"2024-10-07T12:09:32.320627Z","end":"2024-10-07T12:09:32.480540Z","steps":["trace[485250044] 'agreement among raft nodes before linearized reading'  (duration: 158.778048ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:09:39.769340Z","caller":"traceutil/trace.go:171","msg":"trace[524296785] linearizableReadLoop","detail":"{readStateIndex:1008; appliedIndex:1007; }","duration":"106.275938ms","start":"2024-10-07T12:09:39.663051Z","end":"2024-10-07T12:09:39.769327Z","steps":["trace[524296785] 'read index received'  (duration: 106.105892ms)","trace[524296785] 'applied index is now lower than readState.Index'  (duration: 169.699µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T12:09:39.769560Z","caller":"traceutil/trace.go:171","msg":"trace[1506762392] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"128.776273ms","start":"2024-10-07T12:09:39.640770Z","end":"2024-10-07T12:09:39.769546Z","steps":["trace[1506762392] 'process raft request'  (duration: 128.434609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:09:39.769719Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.674267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-b6l97\" ","response":"range_response_count:1 size:4428"}
	{"level":"info","ts":"2024-10-07T12:09:39.769743Z","caller":"traceutil/trace.go:171","msg":"trace[1883260793] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-b6l97; range_end:; response_count:1; response_revision:981; }","duration":"106.705367ms","start":"2024-10-07T12:09:39.663027Z","end":"2024-10-07T12:09:39.769732Z","steps":["trace[1883260793] 'agreement among raft nodes before linearized reading'  (duration: 106.624082ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:10:04.176115Z","caller":"traceutil/trace.go:171","msg":"trace[524432788] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"229.060096ms","start":"2024-10-07T12:10:03.947034Z","end":"2024-10-07T12:10:04.176094Z","steps":["trace[524432788] 'process raft request'  (duration: 228.959339ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:10:06.842382Z","caller":"traceutil/trace.go:171","msg":"trace[1294435642] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"163.728637ms","start":"2024-10-07T12:10:06.678639Z","end":"2024-10-07T12:10:06.842368Z","steps":["trace[1294435642] 'process raft request'  (duration: 163.566042ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:18:34.317351Z","caller":"traceutil/trace.go:171","msg":"trace[1481167046] linearizableReadLoop","detail":"{readStateIndex:2119; appliedIndex:2118; }","duration":"161.987391ms","start":"2024-10-07T12:18:34.155323Z","end":"2024-10-07T12:18:34.317310Z","steps":["trace[1481167046] 'read index received'  (duration: 161.839663ms)","trace[1481167046] 'applied index is now lower than readState.Index'  (duration: 147.204µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T12:18:34.317584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.211101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:18:34.317584Z","caller":"traceutil/trace.go:171","msg":"trace[914235934] transaction","detail":"{read_only:false; response_revision:1974; number_of_response:1; }","duration":"350.849152ms","start":"2024-10-07T12:18:33.966715Z","end":"2024-10-07T12:18:34.317564Z","steps":["trace[914235934] 'process raft request'  (duration: 350.487752ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:18:34.317625Z","caller":"traceutil/trace.go:171","msg":"trace[881179841] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1974; }","duration":"162.313504ms","start":"2024-10-07T12:18:34.155300Z","end":"2024-10-07T12:18:34.317614Z","steps":["trace[881179841] 'agreement among raft nodes before linearized reading'  (duration: 162.191651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:18:34.317712Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:33.966696Z","time spent":"350.928413ms","remote":"127.0.0.1:46292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1969 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-07T12:18:37.162496Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1477}
	{"level":"info","ts":"2024-10-07T12:18:37.200381Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1477,"took":"37.323171ms","hash":1761905209,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3284992,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-10-07T12:18:37.200442Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1761905209,"revision":1477,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T12:19:00.195583Z","caller":"traceutil/trace.go:171","msg":"trace[666705281] linearizableReadLoop","detail":"{readStateIndex:2466; appliedIndex:2465; }","duration":"324.258174ms","start":"2024-10-07T12:18:59.871306Z","end":"2024-10-07T12:19:00.195565Z","steps":["trace[666705281] 'read index received'  (duration: 324.087547ms)","trace[666705281] 'applied index is now lower than readState.Index'  (duration: 170.004µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T12:19:00.196020Z","caller":"traceutil/trace.go:171","msg":"trace[1837867398] transaction","detail":"{read_only:false; response_revision:2310; number_of_response:1; }","duration":"348.798855ms","start":"2024-10-07T12:18:59.847206Z","end":"2024-10-07T12:19:00.196005Z","steps":["trace[1837867398] 'process raft request'  (duration: 348.244931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:19:00.196146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:59.847187Z","time spent":"348.888075ms","remote":"127.0.0.1:46298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4237,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" mod_revision:2309 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" value_size:4137 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" > >"}
	{"level":"warn","ts":"2024-10-07T12:19:00.196372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.058044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:19:00.196424Z","caller":"traceutil/trace.go:171","msg":"trace[161671899] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:2310; }","duration":"325.113051ms","start":"2024-10-07T12:18:59.871301Z","end":"2024-10-07T12:19:00.196414Z","steps":["trace[161671899] 'agreement among raft nodes before linearized reading'  (duration: 325.03141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:19:00.196452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:59.871261Z","time spent":"325.184062ms","remote":"127.0.0.1:46478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/external-resizer-runner\" "}
	
	
	==> kernel <==
	 12:21:04 up 13 min,  0 users,  load average: 0.53, 0.71, 0.49
	Linux addons-054971 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee] <==
	E1007 12:19:07.218416       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:08.235223       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:09.243158       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:10.249851       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:11.258544       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:12.266593       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:13.273681       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:13.487744       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:14.281336       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:15.289191       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 12:19:16.080985       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.43.169"}
	E1007 12:19:16.298977       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:17.309963       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:18.322289       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:19.331556       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:20.344372       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:21.352867       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:22.361500       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:23.368345       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:24.376297       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:25.384774       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:26.393403       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:27.401849       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:28.411603       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 12:21:03.065415       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.144.130"}
	
	
	==> kube-controller-manager [51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66] <==
	I1007 12:19:37.868783       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W1007 12:19:37.985664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:19:37.985817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:19:45.904074       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-054971"
	I1007 12:19:46.001791       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1007 12:19:56.764329       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:19:56.764490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:20:02.223962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:20:02.224043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:20:14.897114       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:20:14.897235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:20:27.134562       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:20:27.134774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:20:37.654457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:20:37.654578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:21:00.618205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:21:00.618314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:21:02.373327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:21:02.373364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 12:21:02.884176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.409237ms"
	I1007 12:21:02.895398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.595412ms"
	I1007 12:21:02.897295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.646µs"
	I1007 12:21:02.900546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.432µs"
	I1007 12:21:04.544293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.957471ms"
	I1007 12:21:04.546082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="59.658µs"
	
	
	==> kube-proxy [ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:08:48.836002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:08:48.848853       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	E1007 12:08:48.848989       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:08:48.932112       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:08:48.932173       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:08:48.932211       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:08:48.935461       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:08:48.935852       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:08:48.935881       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:08:48.939307       1 config.go:199] "Starting service config controller"
	I1007 12:08:48.939348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:08:48.939562       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:08:48.939593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:08:48.940551       1 config.go:328] "Starting node config controller"
	I1007 12:08:48.940586       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:08:49.042090       1 shared_informer.go:320] Caches are synced for node config
	I1007 12:08:49.042154       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:08:49.042187       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3] <==
	W1007 12:08:39.554082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:08:39.554205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.591059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.591095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.681698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:08:39.681773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.689750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.689806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.752513       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:08:39.752623       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 12:08:39.792454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 12:08:39.792508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.824543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:08:39.824772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.905749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:08:39.905849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.979811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.979977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.015765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 12:08:40.015978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.062889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:08:40.062981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.095519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:08:40.095576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:08:42.603506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:20:29 addons-054971 kubelet[1213]: E1007 12:20:29.584780    1213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="25d5204e-dbd2-40d4-8608-1c35f98a64d1"
	Oct 07 12:20:31 addons-054971 kubelet[1213]: E1007 12:20:31.988365    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303631988051380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:31 addons-054971 kubelet[1213]: E1007 12:20:31.988418    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303631988051380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:33 addons-054971 kubelet[1213]: I1007 12:20:33.582637    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-crd5w" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:20:41 addons-054971 kubelet[1213]: I1007 12:20:41.586335    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:20:41 addons-054971 kubelet[1213]: E1007 12:20:41.588369    1213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="25d5204e-dbd2-40d4-8608-1c35f98a64d1"
	Oct 07 12:20:41 addons-054971 kubelet[1213]: E1007 12:20:41.598690    1213 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:20:41 addons-054971 kubelet[1213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:20:41 addons-054971 kubelet[1213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:20:41 addons-054971 kubelet[1213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:20:41 addons-054971 kubelet[1213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:20:41 addons-054971 kubelet[1213]: E1007 12:20:41.991310    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303641990140825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:41 addons-054971 kubelet[1213]: E1007 12:20:41.991395    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303641990140825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:51 addons-054971 kubelet[1213]: E1007 12:20:51.994067    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303651993537399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:51 addons-054971 kubelet[1213]: E1007 12:20:51.994594    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303651993537399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:20:55 addons-054971 kubelet[1213]: I1007 12:20:55.586736    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:20:55 addons-054971 kubelet[1213]: E1007 12:20:55.587985    1213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="25d5204e-dbd2-40d4-8608-1c35f98a64d1"
	Oct 07 12:21:01 addons-054971 kubelet[1213]: E1007 12:21:01.997796    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303661997365299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:21:01 addons-054971 kubelet[1213]: E1007 12:21:01.997873    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303661997365299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565281,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:21:02 addons-054971 kubelet[1213]: E1007 12:21:02.877127    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8426470d-58d8-453b-9517-150af70f0ebe" containerName="local-path-provisioner"
	Oct 07 12:21:02 addons-054971 kubelet[1213]: E1007 12:21:02.877258    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b62cdea8-c260-42a3-a205-70cfc3bb1bf6" containerName="headlamp"
	Oct 07 12:21:02 addons-054971 kubelet[1213]: I1007 12:21:02.877339    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62cdea8-c260-42a3-a205-70cfc3bb1bf6" containerName="headlamp"
	Oct 07 12:21:02 addons-054971 kubelet[1213]: I1007 12:21:02.877380    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="8426470d-58d8-453b-9517-150af70f0ebe" containerName="local-path-provisioner"
	Oct 07 12:21:02 addons-054971 kubelet[1213]: I1007 12:21:02.950551    1213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4kj\" (UniqueName: \"kubernetes.io/projected/5740457f-53c5-4243-9e12-c18af2dffe4b-kube-api-access-2t4kj\") pod \"hello-world-app-55bf9c44b4-s89lv\" (UID: \"5740457f-53c5-4243-9e12-c18af2dffe4b\") " pod="default/hello-world-app-55bf9c44b4-s89lv"
	Oct 07 12:21:04 addons-054971 kubelet[1213]: I1007 12:21:04.531756    1213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-s89lv" podStartSLOduration=1.7384532259999999 podStartE2EDuration="2.531735139s" podCreationTimestamp="2024-10-07 12:21:02 +0000 UTC" firstStartedPulling="2024-10-07 12:21:03.512313407 +0000 UTC m=+742.081871448" lastFinishedPulling="2024-10-07 12:21:04.30559532 +0000 UTC m=+742.875153361" observedRunningTime="2024-10-07 12:21:04.531608323 +0000 UTC m=+743.101166372" watchObservedRunningTime="2024-10-07 12:21:04.531735139 +0000 UTC m=+743.101293187"
	
	
	==> storage-provisioner [6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947] <==
	I1007 12:08:53.320716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:08:53.349407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:08:53.349488       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:08:53.441191       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:08:53.441391       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114!
	I1007 12:08:53.443370       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11aad06b-f705-4064-939b-c915d161912b", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114 became leader
	I1007 12:08:53.547214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-054971 -n addons-054971
helpers_test.go:261: (dbg) Run:  kubectl --context addons-054971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-29nsm ingress-nginx-admission-patch-b6l97
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-054971 describe pod busybox ingress-nginx-admission-create-29nsm ingress-nginx-admission-patch-b6l97
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-054971 describe pod busybox ingress-nginx-admission-create-29nsm ingress-nginx-admission-patch-b6l97: exit status 1 (72.337738ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-054971/192.168.39.62
	Start Time:       Mon, 07 Oct 2024 12:10:13 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sbklt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sbklt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                           Age                   From               Message
	  ----     ------                           ----                  ----               -------
	  Normal   Scheduled                        10m                   default-scheduler  Successfully assigned default/busybox to addons-054971
	  Normal   Pulling                          9m31s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed                           9m31s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed                           9m31s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed                           9m8s (x6 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff                          5m37s (x21 over 10m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  FailedToRetrieveImagePullSecret  50s (x9 over 2m35s)   kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-29nsm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b6l97" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-054971 describe pod busybox ingress-nginx-admission-create-29nsm ingress-nginx-admission-patch-b6l97: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable ingress-dns --alsologtostderr -v=1: (1.149540632s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable ingress --alsologtostderr -v=1: (7.73643312s)
--- FAIL: TestAddons/parallel/Ingress (152.36s)
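The busybox describe output above captures the failure mode shared by the PullSecret and Ingress runs: the kubelet cannot retrieve the gcp-auth image pull secret, and the registry pull then fails with an authentication error. Below is a minimal client-go sketch of how one might verify that condition outside the test suite; it is hypothetical (the file name, kubeconfig path and error handling are illustrative), while the namespace, secret name and service account name are taken directly from the describe output.

// checkpullsecret.go - hypothetical helper, not part of the minikube test suite.
// Checks that the "gcp-auth" image pull secret exists in the "default" namespace
// and whether the "default" service account references it, mirroring the kubelet
// warning "Unable to retrieve some image pull secrets (gcp-auth)".
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default path.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Does the secret the kubelet is looking for actually exist?
	if _, err := cs.CoreV1().Secrets("default").Get(ctx, "gcp-auth", metav1.GetOptions{}); err != nil {
		fmt.Println("gcp-auth secret not found:", err)
	} else {
		fmt.Println("gcp-auth secret present")
	}

	// Is it wired into the service account the busybox pod runs under?
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range sa.ImagePullSecrets {
		fmt.Println("imagePullSecret on default SA:", ref.Name)
	}
}

If either check fails, the image pull proceeds without the injected credentials, which is consistent with the ErrImagePull and FailedToRetrieveImagePullSecret events shown above.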

                                                
                                    
TestAddons/parallel/MetricsServer (321.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I1007 12:18:25.177628  754324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:394: metrics-server stabilized in 3.91986ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-hglsg" [bc1b53d0-93d0-4734-bfbe-9b7172391a6d] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005736572s
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (73.001034ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 9m44.258414446s

                                                
                                                
** /stderr **
I1007 12:18:30.261267  754324 retry.go:31] will retry after 1.716655992s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (100.603729ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 9m46.076165324s

                                                
                                                
** /stderr **
I1007 12:18:32.079862  754324 retry.go:31] will retry after 6.64880158s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (73.716907ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 9m52.800749845s

                                                
                                                
** /stderr **
I1007 12:18:38.803579  754324 retry.go:31] will retry after 10.082889854s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (76.146885ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 10m2.96061526s

                                                
                                                
** /stderr **
I1007 12:18:48.963319  754324 retry.go:31] will retry after 11.55757463s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (72.73943ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 10m14.593398877s

                                                
                                                
** /stderr **
I1007 12:19:00.596150  754324 retry.go:31] will retry after 7.962875412s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (76.951198ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 10m22.634481356s

                                                
                                                
** /stderr **
I1007 12:19:08.637049  754324 retry.go:31] will retry after 29.453497105s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (74.414305ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 10m52.162999825s

                                                
                                                
** /stderr **
I1007 12:19:38.165596  754324 retry.go:31] will retry after 22.932607755s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (69.753125ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 11m15.1689901s

                                                
                                                
** /stderr **
I1007 12:20:01.171484  754324 retry.go:31] will retry after 1m2.138615208s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (77.004018ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 12m17.385709113s

                                                
                                                
** /stderr **
I1007 12:21:03.388287  754324 retry.go:31] will retry after 53.281457669s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (72.67049ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 13m10.742407379s

                                                
                                                
** /stderr **
I1007 12:21:56.745291  754324 retry.go:31] will retry after 56.912442509s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (67.106749ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 14m7.728104345s

                                                
                                                
** /stderr **
I1007 12:22:53.731471  754324 retry.go:31] will retry after 50.34903848s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-054971 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-054971 top pods -n kube-system: exit status 1 (65.735419ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-crd5w, age: 14m58.143974717s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
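Every kubectl top retry above exits 1 because the metrics.k8s.io aggregated API, which metrics-server is supposed to populate, never reports usage for the kube-system pods. The sketch below issues the equivalent query through the official metrics client; it is a hypothetical illustration (file name and kubeconfig path assumed, and the library choice is mine rather than the test's, which shells out to kubectl). If metrics-server were healthy it would print CPU and memory usage for each container, which is the data kubectl top summarises.

// podmetrics.go - hypothetical sketch, assuming kubeconfig at ~/.kube/config.
// Performs the metrics.k8s.io query that backs `kubectl top pods -n kube-system`.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Lists the PodMetrics objects served by metrics-server through the aggregation layer.
	pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, "pod metrics unavailable:", err)
		os.Exit(1)
	}
	// kubectl top reports an error when a pod in the namespace has no entry here;
	// with a healthy metrics-server each container shows non-empty usage.
	for _, p := range pods.Items {
		for _, c := range p.Containers {
			fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
		}
	}
}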
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-054971 -n addons-054971
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 logs -n 25: (1.267826793s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-096310                                                                     | download-only-096310 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| delete  | -p download-only-478522                                                                     | download-only-478522 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-969518 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | binary-mirror-969518                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40857                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-969518                                                                     | binary-mirror-969518 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| addons  | enable dashboard -p                                                                         | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | addons-054971                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | addons-054971                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-054971 --wait=true                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:10 UTC | 07 Oct 24 12:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-054971 ip                                                                            | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-054971 ssh curl -s                                                                   | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-054971 ssh cat                                                                       | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:18 UTC |
	|         | /opt/local-path-provisioner/pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:18 UTC | 07 Oct 24 12:19 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | -p addons-054971                                                                            |                      |         |         |                     |                     |
	| addons  | addons-054971 addons                                                                        | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | -p addons-054971                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:19 UTC | 07 Oct 24 12:19 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-054971 ip                                                                            | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:21 UTC | 07 Oct 24 12:21 UTC |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:21 UTC | 07 Oct 24 12:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-054971 addons disable                                                                | addons-054971        | jenkins | v1.34.0 | 07 Oct 24 12:21 UTC | 07 Oct 24 12:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:07:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:07:59.572642  754935 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:07:59.572806  754935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:59.572819  754935 out.go:358] Setting ErrFile to fd 2...
	I1007 12:07:59.572826  754935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:59.573017  754935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:07:59.573657  754935 out.go:352] Setting JSON to false
	I1007 12:07:59.574654  754935 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6629,"bootTime":1728296251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:07:59.574784  754935 start.go:139] virtualization: kvm guest
	I1007 12:07:59.577043  754935 out.go:177] * [addons-054971] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:07:59.578505  754935 notify.go:220] Checking for updates...
	I1007 12:07:59.578532  754935 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:07:59.580178  754935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:07:59.581611  754935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:07:59.582882  754935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:07:59.584387  754935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:07:59.585594  754935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:07:59.587047  754935 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:07:59.622808  754935 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:07:59.624975  754935 start.go:297] selected driver: kvm2
	I1007 12:07:59.625004  754935 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:07:59.625038  754935 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:07:59.625817  754935 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:59.625911  754935 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:07:59.642331  754935 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:07:59.642394  754935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:07:59.642668  754935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:07:59.642704  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:07:59.642757  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:07:59.642785  754935 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 12:07:59.642857  754935 start.go:340] cluster config:
	{Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:59.642973  754935 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:59.645394  754935 out.go:177] * Starting "addons-054971" primary control-plane node in "addons-054971" cluster
	I1007 12:07:59.647155  754935 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:59.647239  754935 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:07:59.647253  754935 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:59.647371  754935 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:59.647386  754935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:59.647752  754935 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json ...
	I1007 12:07:59.647782  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json: {Name:mka4931e420d409240060afe28d91b99168dee52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:59.647962  754935 start.go:360] acquireMachinesLock for addons-054971: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:59.648043  754935 start.go:364] duration metric: took 60.101µs to acquireMachinesLock for "addons-054971"
	I1007 12:07:59.648073  754935 start.go:93] Provisioning new machine with config: &{Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:59.648138  754935 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:07:59.650270  754935 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 12:07:59.650444  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:59.650514  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:59.665985  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I1007 12:07:59.666530  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:59.667183  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:07:59.667229  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:59.667719  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:59.667995  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:07:59.668183  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:07:59.668424  754935 start.go:159] libmachine.API.Create for "addons-054971" (driver="kvm2")
	I1007 12:07:59.668463  754935 client.go:168] LocalClient.Create starting
	I1007 12:07:59.668515  754935 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:07:59.806192  754935 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:08:00.082840  754935 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:00.082875  754935 main.go:141] libmachine: (addons-054971) Calling .PreCreateCheck
	I1007 12:08:00.083462  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:00.083962  754935 main.go:141] libmachine: Creating machine...
	I1007 12:08:00.083991  754935 main.go:141] libmachine: (addons-054971) Calling .Create
	I1007 12:08:00.084174  754935 main.go:141] libmachine: (addons-054971) Creating KVM machine...
	I1007 12:08:00.085613  754935 main.go:141] libmachine: (addons-054971) DBG | found existing default KVM network
	I1007 12:08:00.086673  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.086483  754957 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I1007 12:08:00.086751  754935 main.go:141] libmachine: (addons-054971) DBG | created network xml: 
	I1007 12:08:00.086777  754935 main.go:141] libmachine: (addons-054971) DBG | <network>
	I1007 12:08:00.086791  754935 main.go:141] libmachine: (addons-054971) DBG |   <name>mk-addons-054971</name>
	I1007 12:08:00.086804  754935 main.go:141] libmachine: (addons-054971) DBG |   <dns enable='no'/>
	I1007 12:08:00.086816  754935 main.go:141] libmachine: (addons-054971) DBG |   
	I1007 12:08:00.086831  754935 main.go:141] libmachine: (addons-054971) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:08:00.086846  754935 main.go:141] libmachine: (addons-054971) DBG |     <dhcp>
	I1007 12:08:00.086855  754935 main.go:141] libmachine: (addons-054971) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:08:00.086861  754935 main.go:141] libmachine: (addons-054971) DBG |     </dhcp>
	I1007 12:08:00.086868  754935 main.go:141] libmachine: (addons-054971) DBG |   </ip>
	I1007 12:08:00.086873  754935 main.go:141] libmachine: (addons-054971) DBG |   
	I1007 12:08:00.086879  754935 main.go:141] libmachine: (addons-054971) DBG | </network>
	I1007 12:08:00.086889  754935 main.go:141] libmachine: (addons-054971) DBG | 
	I1007 12:08:00.092680  754935 main.go:141] libmachine: (addons-054971) DBG | trying to create private KVM network mk-addons-054971 192.168.39.0/24...
	I1007 12:08:00.164246  754935 main.go:141] libmachine: (addons-054971) DBG | private KVM network mk-addons-054971 192.168.39.0/24 created
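
The network XML dumped above can be replayed outside the kvm2 driver with plain virsh. A minimal sketch, assuming virsh is installed on the host and reusing the network name and addressing from the log (the temp-file handling is incidental; the driver itself goes through the libvirt API rather than the CLI):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // networkXML mirrors the <network> definition printed in the log above.
    const networkXML = `<network>
      <name>mk-addons-054971</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-addons-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        // Define the persistent network, then start it.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-addons-054971"},
        } {
            out, err := exec.Command("virsh",
                append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
            fmt.Printf("virsh %v:\n%s", args, out)
            if err != nil {
                panic(err)
            }
        }
    }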
	I1007 12:08:00.164284  754935 main.go:141] libmachine: (addons-054971) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 ...
	I1007 12:08:00.164310  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.164175  754957 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:08:00.164328  754935 main.go:141] libmachine: (addons-054971) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:00.164348  754935 main.go:141] libmachine: (addons-054971) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:00.437829  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.437643  754957 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa...
	I1007 12:08:00.654995  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.654793  754957 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/addons-054971.rawdisk...
	I1007 12:08:00.655033  754935 main.go:141] libmachine: (addons-054971) DBG | Writing magic tar header
	I1007 12:08:00.655050  754935 main.go:141] libmachine: (addons-054971) DBG | Writing SSH key tar header
	I1007 12:08:00.655061  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:00.654922  754957 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 ...
	I1007 12:08:00.655075  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971
	I1007 12:08:00.655082  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:08:00.655091  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:08:00.655097  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:08:00.655107  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:00.655116  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:00.655126  754935 main.go:141] libmachine: (addons-054971) DBG | Checking permissions on dir: /home
	I1007 12:08:00.655137  754935 main.go:141] libmachine: (addons-054971) DBG | Skipping /home - not owner
	I1007 12:08:00.655153  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971 (perms=drwx------)
	I1007 12:08:00.655162  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:00.655172  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:08:00.655184  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:08:00.655217  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:00.655237  754935 main.go:141] libmachine: (addons-054971) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:00.655246  754935 main.go:141] libmachine: (addons-054971) Creating domain...
	I1007 12:08:00.656395  754935 main.go:141] libmachine: (addons-054971) define libvirt domain using xml: 
	I1007 12:08:00.656430  754935 main.go:141] libmachine: (addons-054971) <domain type='kvm'>
	I1007 12:08:00.656439  754935 main.go:141] libmachine: (addons-054971)   <name>addons-054971</name>
	I1007 12:08:00.656445  754935 main.go:141] libmachine: (addons-054971)   <memory unit='MiB'>4000</memory>
	I1007 12:08:00.656451  754935 main.go:141] libmachine: (addons-054971)   <vcpu>2</vcpu>
	I1007 12:08:00.656455  754935 main.go:141] libmachine: (addons-054971)   <features>
	I1007 12:08:00.656460  754935 main.go:141] libmachine: (addons-054971)     <acpi/>
	I1007 12:08:00.656466  754935 main.go:141] libmachine: (addons-054971)     <apic/>
	I1007 12:08:00.656471  754935 main.go:141] libmachine: (addons-054971)     <pae/>
	I1007 12:08:00.656481  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656486  754935 main.go:141] libmachine: (addons-054971)   </features>
	I1007 12:08:00.656496  754935 main.go:141] libmachine: (addons-054971)   <cpu mode='host-passthrough'>
	I1007 12:08:00.656534  754935 main.go:141] libmachine: (addons-054971)   
	I1007 12:08:00.656562  754935 main.go:141] libmachine: (addons-054971)   </cpu>
	I1007 12:08:00.656586  754935 main.go:141] libmachine: (addons-054971)   <os>
	I1007 12:08:00.656603  754935 main.go:141] libmachine: (addons-054971)     <type>hvm</type>
	I1007 12:08:00.656610  754935 main.go:141] libmachine: (addons-054971)     <boot dev='cdrom'/>
	I1007 12:08:00.656615  754935 main.go:141] libmachine: (addons-054971)     <boot dev='hd'/>
	I1007 12:08:00.656621  754935 main.go:141] libmachine: (addons-054971)     <bootmenu enable='no'/>
	I1007 12:08:00.656627  754935 main.go:141] libmachine: (addons-054971)   </os>
	I1007 12:08:00.656632  754935 main.go:141] libmachine: (addons-054971)   <devices>
	I1007 12:08:00.656638  754935 main.go:141] libmachine: (addons-054971)     <disk type='file' device='cdrom'>
	I1007 12:08:00.656646  754935 main.go:141] libmachine: (addons-054971)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/boot2docker.iso'/>
	I1007 12:08:00.656653  754935 main.go:141] libmachine: (addons-054971)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:00.656658  754935 main.go:141] libmachine: (addons-054971)       <readonly/>
	I1007 12:08:00.656665  754935 main.go:141] libmachine: (addons-054971)     </disk>
	I1007 12:08:00.656674  754935 main.go:141] libmachine: (addons-054971)     <disk type='file' device='disk'>
	I1007 12:08:00.656686  754935 main.go:141] libmachine: (addons-054971)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:00.656695  754935 main.go:141] libmachine: (addons-054971)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/addons-054971.rawdisk'/>
	I1007 12:08:00.656702  754935 main.go:141] libmachine: (addons-054971)       <target dev='hda' bus='virtio'/>
	I1007 12:08:00.656706  754935 main.go:141] libmachine: (addons-054971)     </disk>
	I1007 12:08:00.656713  754935 main.go:141] libmachine: (addons-054971)     <interface type='network'>
	I1007 12:08:00.656734  754935 main.go:141] libmachine: (addons-054971)       <source network='mk-addons-054971'/>
	I1007 12:08:00.656741  754935 main.go:141] libmachine: (addons-054971)       <model type='virtio'/>
	I1007 12:08:00.656747  754935 main.go:141] libmachine: (addons-054971)     </interface>
	I1007 12:08:00.656755  754935 main.go:141] libmachine: (addons-054971)     <interface type='network'>
	I1007 12:08:00.656771  754935 main.go:141] libmachine: (addons-054971)       <source network='default'/>
	I1007 12:08:00.656787  754935 main.go:141] libmachine: (addons-054971)       <model type='virtio'/>
	I1007 12:08:00.656801  754935 main.go:141] libmachine: (addons-054971)     </interface>
	I1007 12:08:00.656817  754935 main.go:141] libmachine: (addons-054971)     <serial type='pty'>
	I1007 12:08:00.656829  754935 main.go:141] libmachine: (addons-054971)       <target port='0'/>
	I1007 12:08:00.656838  754935 main.go:141] libmachine: (addons-054971)     </serial>
	I1007 12:08:00.656858  754935 main.go:141] libmachine: (addons-054971)     <console type='pty'>
	I1007 12:08:00.656866  754935 main.go:141] libmachine: (addons-054971)       <target type='serial' port='0'/>
	I1007 12:08:00.656871  754935 main.go:141] libmachine: (addons-054971)     </console>
	I1007 12:08:00.656875  754935 main.go:141] libmachine: (addons-054971)     <rng model='virtio'>
	I1007 12:08:00.656884  754935 main.go:141] libmachine: (addons-054971)       <backend model='random'>/dev/random</backend>
	I1007 12:08:00.656889  754935 main.go:141] libmachine: (addons-054971)     </rng>
	I1007 12:08:00.656894  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656900  754935 main.go:141] libmachine: (addons-054971)     
	I1007 12:08:00.656913  754935 main.go:141] libmachine: (addons-054971)   </devices>
	I1007 12:08:00.656925  754935 main.go:141] libmachine: (addons-054971) </domain>
	I1007 12:08:00.656940  754935 main.go:141] libmachine: (addons-054971) 
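
Defining and booting the guest from the domain XML above maps onto virsh define followed by virsh start. A hedged sketch of that step (the XML file name is a placeholder; this is not the driver's actual code, which uses the libvirt bindings directly):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // virsh runs a single virsh subcommand against the system libvirt daemon.
    func virsh(args ...string) error {
        out, err := exec.Command("virsh",
            append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
        fmt.Printf("virsh %v:\n%s", args, out)
        return err
    }

    func main() {
        // addons-054971.xml would hold the <domain> definition printed above.
        if err := virsh("define", "addons-054971.xml"); err != nil {
            panic(err)
        }
        if err := virsh("start", "addons-054971"); err != nil {
            panic(err)
        }
    }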
	I1007 12:08:00.663302  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:f5:15:6e in network default
	I1007 12:08:00.663783  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:00.663812  754935 main.go:141] libmachine: (addons-054971) Ensuring networks are active...
	I1007 12:08:00.664547  754935 main.go:141] libmachine: (addons-054971) Ensuring network default is active
	I1007 12:08:00.664921  754935 main.go:141] libmachine: (addons-054971) Ensuring network mk-addons-054971 is active
	I1007 12:08:00.665479  754935 main.go:141] libmachine: (addons-054971) Getting domain xml...
	I1007 12:08:00.666246  754935 main.go:141] libmachine: (addons-054971) Creating domain...
	I1007 12:08:01.210389  754935 main.go:141] libmachine: (addons-054971) Waiting to get IP...
	I1007 12:08:01.211322  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.211807  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.211832  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.211782  754957 retry.go:31] will retry after 302.532395ms: waiting for machine to come up
	I1007 12:08:01.516145  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.516598  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.516671  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.516534  754957 retry.go:31] will retry after 235.273407ms: waiting for machine to come up
	I1007 12:08:01.752903  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:01.753322  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:01.753352  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:01.753281  754957 retry.go:31] will retry after 339.470407ms: waiting for machine to come up
	I1007 12:08:02.095125  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:02.095554  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:02.095586  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:02.095501  754957 retry.go:31] will retry after 563.14845ms: waiting for machine to come up
	I1007 12:08:02.660208  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:02.660689  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:02.660715  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:02.660627  754957 retry.go:31] will retry after 525.569187ms: waiting for machine to come up
	I1007 12:08:03.187514  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:03.188033  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:03.188059  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:03.187980  754957 retry.go:31] will retry after 625.441425ms: waiting for machine to come up
	I1007 12:08:03.814765  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:03.815125  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:03.815148  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:03.815093  754957 retry.go:31] will retry after 741.448412ms: waiting for machine to come up
	I1007 12:08:04.558071  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:04.558559  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:04.558583  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:04.558499  754957 retry.go:31] will retry after 1.166707702s: waiting for machine to come up
	I1007 12:08:05.727215  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:05.728021  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:05.728067  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:05.727899  754957 retry.go:31] will retry after 1.558030288s: waiting for machine to come up
	I1007 12:08:07.287788  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:07.288772  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:07.289184  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:07.288566  754957 retry.go:31] will retry after 2.291932799s: waiting for machine to come up
	I1007 12:08:09.583293  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:09.583766  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:09.583885  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:09.583815  754957 retry.go:31] will retry after 2.102395553s: waiting for machine to come up
	I1007 12:08:11.688800  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:11.689284  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:11.689303  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:11.689222  754957 retry.go:31] will retry after 2.844478116s: waiting for machine to come up
	I1007 12:08:14.537542  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:14.537949  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:14.537968  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:14.537895  754957 retry.go:31] will retry after 4.101176697s: waiting for machine to come up
	I1007 12:08:18.644021  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:18.644418  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find current IP address of domain addons-054971 in network mk-addons-054971
	I1007 12:08:18.644444  754935 main.go:141] libmachine: (addons-054971) DBG | I1007 12:08:18.644366  754957 retry.go:31] will retry after 3.901511536s: waiting for machine to come up
	I1007 12:08:22.549411  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.550012  754935 main.go:141] libmachine: (addons-054971) Found IP for machine: 192.168.39.62
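
The "waiting for machine to come up" loop above is, in effect, polling the network's DHCP leases for the guest's MAC address with an increasing backoff. A rough equivalent using virsh net-dhcp-leases (network name and MAC copied from the log; the lease parsing is a simplistic assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the libvirt DHCP leases of a network until an entry for
    // the given MAC address shows up, like the retry loop in the log above.
    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            out, _ := exec.Command("virsh", "--connect", "qemu:///system",
                "net-dhcp-leases", network).CombinedOutput()
            for _, line := range strings.Split(string(out), "\n") {
                if strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
                    // Lease rows carry the address as e.g. "192.168.39.62/24".
                    for _, field := range strings.Fields(line) {
                        if strings.Contains(field, ".") && strings.Contains(field, "/") {
                            return strings.SplitN(field, "/", 2)[0], nil
                        }
                    }
                }
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
    }

    func main() {
        ip, err := waitForIP("mk-addons-054971", "52:54:00:06:35:95", 4*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("found IP:", ip)
    }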
	I1007 12:08:22.550071  754935 main.go:141] libmachine: (addons-054971) Reserving static IP address...
	I1007 12:08:22.550089  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has current primary IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.550441  754935 main.go:141] libmachine: (addons-054971) DBG | unable to find host DHCP lease matching {name: "addons-054971", mac: "52:54:00:06:35:95", ip: "192.168.39.62"} in network mk-addons-054971
	I1007 12:08:22.741070  754935 main.go:141] libmachine: (addons-054971) DBG | Getting to WaitForSSH function...
	I1007 12:08:22.741107  754935 main.go:141] libmachine: (addons-054971) Reserved static IP address: 192.168.39.62
	I1007 12:08:22.741120  754935 main.go:141] libmachine: (addons-054971) Waiting for SSH to be available...
	I1007 12:08:22.743956  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.744432  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:35:95}
	I1007 12:08:22.744480  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.744644  754935 main.go:141] libmachine: (addons-054971) DBG | Using SSH client type: external
	I1007 12:08:22.744670  754935 main.go:141] libmachine: (addons-054971) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa (-rw-------)
	I1007 12:08:22.744700  754935 main.go:141] libmachine: (addons-054971) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:08:22.744713  754935 main.go:141] libmachine: (addons-054971) DBG | About to run SSH command:
	I1007 12:08:22.744725  754935 main.go:141] libmachine: (addons-054971) DBG | exit 0
	I1007 12:08:22.874506  754935 main.go:141] libmachine: (addons-054971) DBG | SSH cmd err, output: <nil>: 
	I1007 12:08:22.874685  754935 main.go:141] libmachine: (addons-054971) KVM machine creation complete!
	I1007 12:08:22.875347  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:22.908849  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:22.909495  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:22.909843  754935 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:08:22.909870  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:22.911317  754935 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:08:22.911338  754935 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:08:22.911344  754935 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:08:22.911350  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:22.914176  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.914698  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:35:95}
	I1007 12:08:22.914747  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:22.914990  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:22.915265  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:22.915477  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:22.915678  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:22.915890  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:22.916127  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:22.916142  754935 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:08:23.029880  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
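
The WaitForSSH step boils down to running "exit 0" over SSH with the options shown at the "Using SSH client type: external" line, retried until sshd in the guest accepts the machine key. A sketch under that assumption (host, key path and options copied from the log; the retry loop itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns nil once `ssh ... exit 0` succeeds, i.e. the guest's
    // sshd is up and accepts the machine key.
    func sshReady(host, key string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
            "-i", key, "-p", "22", "docker@" + host, "exit", "0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa"
        for i := 0; i < 30; i++ {
            if err := sshReady("192.168.39.62", key); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }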
	I1007 12:08:23.029908  754935 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:08:23.029916  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.033178  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.033588  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.033612  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.033819  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.034077  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.034262  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.034431  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.034592  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.034801  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.034815  754935 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:08:23.151341  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:08:23.151412  754935 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:08:23.151419  754935 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:08:23.151430  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.151732  754935 buildroot.go:166] provisioning hostname "addons-054971"
	I1007 12:08:23.151768  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.151990  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.154694  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.155012  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.155052  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.155246  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.155430  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.155588  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.155729  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.155898  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.156077  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.156090  754935 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-054971 && echo "addons-054971" | sudo tee /etc/hostname
	I1007 12:08:23.285340  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-054971
	
	I1007 12:08:23.285375  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.288360  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.288768  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.288798  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.288999  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.289211  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.289383  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.289526  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.289704  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:23.289895  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:23.289910  754935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054971/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:08:23.416271  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
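
The provisioning step above boils down to two SSH commands: set the hostname (transient and in /etc/hostname), then make sure /etc/hosts resolves it. A minimal by-hand sketch using the key path and guest IP from this run — a simplified version of the logic logged above, not minikube's actual Go code:

	# host-side shell; key path and IP taken from this run
	SSH="ssh -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa docker@192.168.39.62"
	$SSH "sudo hostname addons-054971 && echo addons-054971 | sudo tee /etc/hostname"
	$SSH "grep -q addons-054971 /etc/hosts || echo '127.0.1.1 addons-054971' | sudo tee -a /etc/hosts"
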
	I1007 12:08:23.416312  754935 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:08:23.416380  754935 buildroot.go:174] setting up certificates
	I1007 12:08:23.416404  754935 provision.go:84] configureAuth start
	I1007 12:08:23.416427  754935 main.go:141] libmachine: (addons-054971) Calling .GetMachineName
	I1007 12:08:23.416841  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:23.419388  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.419711  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.419742  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.419875  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.422119  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.422421  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.422449  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.422580  754935 provision.go:143] copyHostCerts
	I1007 12:08:23.422691  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:08:23.422857  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:08:23.422947  754935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:08:23.423029  754935 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.addons-054971 san=[127.0.0.1 192.168.39.62 addons-054971 localhost minikube]
	I1007 12:08:23.850763  754935 provision.go:177] copyRemoteCerts
	I1007 12:08:23.850838  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:08:23.850865  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:23.853646  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.854185  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:23.854220  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:23.854413  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:23.854607  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:23.854752  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:23.855039  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:23.941420  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:08:23.969069  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:08:23.995784  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:08:24.021483  754935 provision.go:87] duration metric: took 605.054524ms to configureAuth
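
copyRemoteCerts pushes the CA plus the freshly generated server cert/key into /etc/docker on the guest. A quick sanity check of what landed there (a sketch, run against the same guest with the paths logged above):

	SSH="ssh -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa docker@192.168.39.62"
	# server.pem should chain to the minikube CA and carry the SANs generated above
	$SSH "sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem"
	$SSH "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"
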
	I1007 12:08:24.021519  754935 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:08:24.021712  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:24.021794  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.024445  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.024732  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.024752  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.024944  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.025142  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.025329  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.025502  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.025658  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:24.025871  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:24.025887  754935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:08:24.266440  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:08:24.266472  754935 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:08:24.266482  754935 main.go:141] libmachine: (addons-054971) Calling .GetURL
	I1007 12:08:24.268085  754935 main.go:141] libmachine: (addons-054971) DBG | Using libvirt version 6000000
	I1007 12:08:24.270308  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.270671  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.270702  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.270899  754935 main.go:141] libmachine: Docker is up and running!
	I1007 12:08:24.270913  754935 main.go:141] libmachine: Reticulating splines...
	I1007 12:08:24.270921  754935 client.go:171] duration metric: took 24.602447605s to LocalClient.Create
	I1007 12:08:24.270945  754935 start.go:167] duration metric: took 24.602524604s to libmachine.API.Create "addons-054971"
	I1007 12:08:24.270965  754935 start.go:293] postStartSetup for "addons-054971" (driver="kvm2")
	I1007 12:08:24.270977  754935 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:08:24.270995  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.271292  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:08:24.271322  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.273234  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.273548  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.273574  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.273712  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.273887  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.274077  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.274209  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.360828  754935 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:08:24.365389  754935 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:08:24.365445  754935 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:08:24.365532  754935 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:08:24.365567  754935 start.go:296] duration metric: took 94.594256ms for postStartSetup
	I1007 12:08:24.365620  754935 main.go:141] libmachine: (addons-054971) Calling .GetConfigRaw
	I1007 12:08:24.366234  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:24.369106  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.369474  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.369502  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.369750  754935 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/config.json ...
	I1007 12:08:24.369975  754935 start.go:128] duration metric: took 24.72182471s to createHost
	I1007 12:08:24.370004  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.372113  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.372404  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.372443  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.372589  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.372781  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.372944  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.373081  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.373250  754935 main.go:141] libmachine: Using SSH client type: native
	I1007 12:08:24.373420  754935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1007 12:08:24.373430  754935 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:08:24.487069  754935 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302904.462104795
	
	I1007 12:08:24.487097  754935 fix.go:216] guest clock: 1728302904.462104795
	I1007 12:08:24.487105  754935 fix.go:229] Guest: 2024-10-07 12:08:24.462104795 +0000 UTC Remote: 2024-10-07 12:08:24.369989566 +0000 UTC m=+24.839624309 (delta=92.115229ms)
	I1007 12:08:24.487154  754935 fix.go:200] guest clock delta is within tolerance: 92.115229ms
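
The guest-clock check runs date +%s.%N over SSH and compares it with the host's wall clock; only a delta outside the tolerance would trigger a resync. The same measurement can be taken by hand (a sketch; assumes bc on the host, key path as in this run):

	KEY=/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa
	host_now=$(date +%s.%N)
	guest_now=$(ssh -i "$KEY" docker@192.168.39.62 'date +%s.%N')
	echo "guest-host skew: $(echo "$guest_now - $host_now" | bc) s"
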
	I1007 12:08:24.487164  754935 start.go:83] releasing machines lock for "addons-054971", held for 24.839104324s
	I1007 12:08:24.487194  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.487488  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:24.490137  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.490612  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.490640  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.490816  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491321  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491483  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:24.491592  754935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:08:24.491649  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.491796  754935 ssh_runner.go:195] Run: cat /version.json
	I1007 12:08:24.491831  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:24.494734  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495061  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.495087  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495107  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495305  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.495530  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.495609  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:24.495634  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:24.495696  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.495771  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:24.495843  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.495881  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:24.496097  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:24.496285  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:24.607711  754935 ssh_runner.go:195] Run: systemctl --version
	I1007 12:08:24.613966  754935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:08:24.774833  754935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:08:24.781653  754935 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:08:24.781735  754935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:08:24.799429  754935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:08:24.799461  754935 start.go:495] detecting cgroup driver to use...
	I1007 12:08:24.799550  754935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:08:24.816749  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:08:24.832373  754935 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:08:24.832448  754935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:08:24.847340  754935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:08:24.862121  754935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:08:24.974702  754935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:08:25.133182  754935 docker.go:233] disabling docker service ...
	I1007 12:08:25.133259  754935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:08:25.148190  754935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:08:25.161503  754935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:08:25.302582  754935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:08:25.415236  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:08:25.430690  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:08:25.450234  754935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:08:25.450304  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.461363  754935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:08:25.461533  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.472443  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.483633  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.494682  754935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:08:25.505823  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.517153  754935 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.536034  754935 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:08:25.547258  754935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:08:25.557100  754935 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:08:25.557175  754935 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:08:25.571038  754935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:08:25.581065  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.702234  754935 ssh_runner.go:195] Run: sudo systemctl restart crio
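
After the sed edits and the crio restart above, the drop-in on the guest should carry the pause image, cgroup driver and sysctl that were just written. A quick check, run on the guest (sketch):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	systemctl is-active crio
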
	I1007 12:08:25.796548  754935 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:25.796660  754935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:25.801839  754935 start.go:563] Will wait 60s for crictl version
	I1007 12:08:25.801921  754935 ssh_runner.go:195] Run: which crictl
	I1007 12:08:25.806239  754935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:25.850119  754935 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:08:25.850233  754935 ssh_runner.go:195] Run: crio --version
	I1007 12:08:25.882752  754935 ssh_runner.go:195] Run: crio --version
	I1007 12:08:25.913822  754935 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:25.915342  754935 main.go:141] libmachine: (addons-054971) Calling .GetIP
	I1007 12:08:25.918204  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:25.918593  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:25.918625  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:25.918910  754935 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:25.923594  754935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:25.937482  754935 kubeadm.go:883] updating cluster {Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:08:25.937608  754935 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:25.937653  754935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:08:25.973328  754935 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:08:25.973400  754935 ssh_runner.go:195] Run: which lz4
	I1007 12:08:25.977586  754935 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:08:25.981791  754935 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:08:25.981853  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:08:27.380108  754935 crio.go:462] duration metric: took 1.402551401s to copy over tarball
	I1007 12:08:27.380215  754935 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:08:29.599799  754935 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219548429s)
	I1007 12:08:29.599842  754935 crio.go:469] duration metric: took 2.219698523s to extract the tarball
	I1007 12:08:29.599852  754935 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:08:29.639177  754935 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:08:29.685454  754935 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:08:29.685490  754935 cache_images.go:84] Images are preloaded, skipping loading
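
The preload path above is: scp the cached tarball to the guest, unpack it into /var, remove it, then re-list images. Condensed into the guest-side commands the log shows (sketch):

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images
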
	I1007 12:08:29.685501  754935 kubeadm.go:934] updating node { 192.168.39.62 8443 v1.31.1 crio true true} ...
	I1007 12:08:29.685632  754935 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-054971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:29.685731  754935 ssh_runner.go:195] Run: crio config
	I1007 12:08:29.740722  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:08:29.740750  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:08:29.740762  754935 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:08:29.740784  754935 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054971 NodeName:addons-054971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:08:29.740945  754935 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054971"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:08:29.741024  754935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:29.752821  754935 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:08:29.752909  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:08:29.764740  754935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:08:29.783575  754935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:29.802470  754935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
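
With the generated config now at /var/tmp/minikube/kubeadm.yaml.new on the guest, it can be compared against kubeadm's own defaults for this version (a sketch; the kubeadm binary lives under the path logged above):

	/var/lib/minikube/binaries/v1.31.1/kubeadm config print init-defaults
	/var/lib/minikube/binaries/v1.31.1/kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration
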
	I1007 12:08:29.820581  754935 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:29.825059  754935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:29.839011  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:29.978306  754935 ssh_runner.go:195] Run: sudo systemctl start kubelet
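
At this point the kubelet unit, its 10-kubeadm.conf drop-in and kubeadm.yaml.new are all in place and the kubelet has been started. Inspecting the effective unit on the guest (sketch):

	sudo systemctl cat kubelet
	systemctl is-active kubelet
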
	I1007 12:08:29.996730  754935 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971 for IP: 192.168.39.62
	I1007 12:08:29.996769  754935 certs.go:194] generating shared ca certs ...
	I1007 12:08:29.996789  754935 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:29.996986  754935 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:08:30.125391  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt ...
	I1007 12:08:30.125430  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt: {Name:mkf38bf1f27b36c5a90d408329bd80f1d68bbecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.125621  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key ...
	I1007 12:08:30.125632  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key: {Name:mk168e4f92eadd0196eca20db6f9ccfcf5db1621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.125715  754935 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:08:30.305758  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt ...
	I1007 12:08:30.305792  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt: {Name:mk56fe9616efe3c3bc3e1ceda5b49e5b20b43e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.305969  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key ...
	I1007 12:08:30.305980  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key: {Name:mk47f918245deed16906815c0d30c35fb7007064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.306077  754935 certs.go:256] generating profile certs ...
	I1007 12:08:30.306148  754935 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key
	I1007 12:08:30.306163  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt with IP's: []
	I1007 12:08:30.633236  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt ...
	I1007 12:08:30.633273  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: {Name:mk2af063631c68299ee0f188c8248df6f07e8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.633453  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key ...
	I1007 12:08:30.633464  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.key: {Name:mkfe775678202cd58fcf06ea7b26ad5560d3a483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.633532  754935 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073
	I1007 12:08:30.633551  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I1007 12:08:30.889463  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 ...
	I1007 12:08:30.889499  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073: {Name:mk9ac46c8c2cfd9cc90be39a3d6acc574fb18e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.889675  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073 ...
	I1007 12:08:30.889688  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073: {Name:mkdc25b5cb3b50e13806fd559153de2005948061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.889761  754935 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt.62eff073 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt
	I1007 12:08:30.889860  754935 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key.62eff073 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key
	I1007 12:08:30.889912  754935 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key
	I1007 12:08:30.889931  754935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt with IP's: []
	I1007 12:08:30.983559  754935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt ...
	I1007 12:08:30.983594  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt: {Name:mkad2e0848c9219bce5e94cbee1000568da3bb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:30.983782  754935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key ...
	I1007 12:08:30.983796  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key: {Name:mk7e53181e8b50324583479ecc40043bfdc3782e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
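
These certificates are generated in-process by crypto.go; an openssl equivalent for the apiserver certificate with the SANs listed above would look roughly like this (purely illustrative — the subjects are placeholders, not the exact ones minikube uses):

	# illustrative CA plus signed apiserver cert; SANs copied from the log above
	openssl genrsa -out ca.key 2048
	openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 365 -out ca.crt
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" |
	  openssl x509 -req -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.62') \
	    -out apiserver.crt
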
	I1007 12:08:30.983965  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:30.984002  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:30.984024  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:30.984048  754935 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:08:30.984838  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:31.018116  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:08:31.045859  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:31.074096  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:08:31.101386  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 12:08:31.127326  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:08:31.152967  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:31.183365  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:08:31.211557  754935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:31.238242  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:08:31.257072  754935 ssh_runner.go:195] Run: openssl version
	I1007 12:08:31.263620  754935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:31.275430  754935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.280847  754935 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.280927  754935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:31.287409  754935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
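
The two commands above install the minikube CA system-wide on the guest: the PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs, then a second symlink named after its OpenSSL subject hash (b5213941.0 in this run) is created so TLS clients can find it. Done by hand on the guest (sketch):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
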
	I1007 12:08:31.299175  754935 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:31.303993  754935 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:31.304061  754935 kubeadm.go:392] StartCluster: {Name:addons-054971 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-054971 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:31.304158  754935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:08:31.304227  754935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:08:31.344425  754935 cri.go:89] found id: ""
	I1007 12:08:31.344513  754935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:08:31.355234  754935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:08:31.367359  754935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:08:31.377623  754935 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:08:31.377649  754935 kubeadm.go:157] found existing configuration files:
	
	I1007 12:08:31.377706  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:08:31.387122  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:08:31.387188  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:08:31.397541  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:08:31.410453  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:08:31.410607  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:08:31.421318  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:08:31.431331  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:08:31.431395  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:08:31.441975  754935 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:08:31.452515  754935 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:08:31.452580  754935 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:08:31.462994  754935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:08:31.521057  754935 kubeadm.go:310] W1007 12:08:31.504124     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:08:31.521747  754935 kubeadm.go:310] W1007 12:08:31.505103     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:08:31.648302  754935 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:08:42.239304  754935 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:08:42.239394  754935 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:08:42.239516  754935 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:08:42.239664  754935 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:08:42.239780  754935 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:08:42.239840  754935 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:08:42.241259  754935 out.go:235]   - Generating certificates and keys ...
	I1007 12:08:42.241353  754935 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:08:42.241425  754935 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:08:42.241497  754935 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:08:42.241550  754935 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:08:42.241601  754935 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:08:42.241653  754935 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:08:42.241699  754935 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:08:42.241819  754935 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-054971 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1007 12:08:42.241914  754935 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:08:42.242108  754935 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-054971 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1007 12:08:42.242192  754935 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:08:42.242275  754935 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:08:42.242337  754935 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:08:42.242427  754935 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:08:42.242499  754935 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:08:42.242553  754935 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:08:42.242616  754935 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:08:42.242689  754935 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:08:42.242763  754935 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:08:42.242852  754935 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:08:42.242949  754935 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:08:42.244275  754935 out.go:235]   - Booting up control plane ...
	I1007 12:08:42.244365  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:08:42.244436  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:08:42.244531  754935 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:08:42.244703  754935 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:08:42.244792  754935 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:08:42.244826  754935 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:08:42.244940  754935 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:08:42.245051  754935 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:08:42.245142  754935 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001601827s
	I1007 12:08:42.245240  754935 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:08:42.245316  754935 kubeadm.go:310] [api-check] The API server is healthy after 5.503726284s
	I1007 12:08:42.245443  754935 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:08:42.245592  754935 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:08:42.245666  754935 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:08:42.245971  754935 kubeadm.go:310] [mark-control-plane] Marking the node addons-054971 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:08:42.246075  754935 kubeadm.go:310] [bootstrap-token] Using token: hpfhac.k0ed3mhw422jku3i
	I1007 12:08:42.247396  754935 out.go:235]   - Configuring RBAC rules ...
	I1007 12:08:42.247499  754935 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:08:42.247572  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:08:42.247711  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:08:42.247816  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:08:42.247916  754935 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:08:42.248014  754935 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:08:42.248150  754935 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:08:42.248229  754935 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:08:42.248297  754935 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:08:42.248306  754935 kubeadm.go:310] 
	I1007 12:08:42.248386  754935 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:08:42.248396  754935 kubeadm.go:310] 
	I1007 12:08:42.248494  754935 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:08:42.248503  754935 kubeadm.go:310] 
	I1007 12:08:42.248524  754935 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:08:42.248579  754935 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:08:42.248622  754935 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:08:42.248631  754935 kubeadm.go:310] 
	I1007 12:08:42.248701  754935 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:08:42.248711  754935 kubeadm.go:310] 
	I1007 12:08:42.248750  754935 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:08:42.248754  754935 kubeadm.go:310] 
	I1007 12:08:42.248844  754935 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:08:42.248919  754935 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:08:42.249011  754935 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:08:42.249030  754935 kubeadm.go:310] 
	I1007 12:08:42.249168  754935 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:08:42.249267  754935 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:08:42.249277  754935 kubeadm.go:310] 
	I1007 12:08:42.249486  754935 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hpfhac.k0ed3mhw422jku3i \
	I1007 12:08:42.249624  754935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:08:42.249655  754935 kubeadm.go:310] 	--control-plane 
	I1007 12:08:42.249662  754935 kubeadm.go:310] 
	I1007 12:08:42.249741  754935 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:08:42.249758  754935 kubeadm.go:310] 
	I1007 12:08:42.249867  754935 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hpfhac.k0ed3mhw422jku3i \
	I1007 12:08:42.250007  754935 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:08:42.250069  754935 cni.go:84] Creating CNI manager for ""
	I1007 12:08:42.250131  754935 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:08:42.251788  754935 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 12:08:42.253184  754935 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 12:08:42.268059  754935 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
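	(For context: the 1-k8s.conflist copied above is minikube's bridge CNI configuration. Its exact contents are not captured in this log; the sketch below is illustrative only, with assumed field values, and shows the general shape of such a bridge conflist plus how the real file could be inspected on the node.)

	# Sketch only -- assumed values, NOT the actual 496-byte file from the log above.
	# The real file can be inspected with:
	#   minikube ssh -p addons-054971 -- sudo cat /etc/cni/net.d/1-k8s.conflist
	cat <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "addIf": "true",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF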
	I1007 12:08:42.289284  754935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:08:42.289372  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:42.289435  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054971 minikube.k8s.io/updated_at=2024_10_07T12_08_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=addons-054971 minikube.k8s.io/primary=true
	I1007 12:08:42.442331  754935 ops.go:34] apiserver oom_adj: -16
	I1007 12:08:42.442529  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:42.942986  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:43.442778  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:43.942663  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:44.442794  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:44.942804  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:45.443049  754935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:08:45.541560  754935 kubeadm.go:1113] duration metric: took 3.252260027s to wait for elevateKubeSystemPrivileges
	I1007 12:08:45.541595  754935 kubeadm.go:394] duration metric: took 14.23754191s to StartCluster
	I1007 12:08:45.541616  754935 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:45.541851  754935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:08:45.542283  754935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:45.542492  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:08:45.542518  754935 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:45.542574  754935 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
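	(Aside: the toEnable map above reflects the per-profile addon selection. As a rough illustration, using the standard minikube CLI rather than anything taken from this run, the same addons can be listed or toggled per profile as shown below.)

	# Assumed standard minikube CLI usage, not commands from this log.
	minikube -p addons-054971 addons list
	minikube -p addons-054971 addons enable metrics-server
	# volcano is reported as unsupported on crio later in this log, so it could be disabled:
	minikube -p addons-054971 addons disable volcano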
	I1007 12:08:45.542685  754935 addons.go:69] Setting yakd=true in profile "addons-054971"
	I1007 12:08:45.542703  754935 addons.go:234] Setting addon yakd=true in "addons-054971"
	I1007 12:08:45.542713  754935 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-054971"
	I1007 12:08:45.542715  754935 addons.go:69] Setting gcp-auth=true in profile "addons-054971"
	I1007 12:08:45.542742  754935 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-054971"
	I1007 12:08:45.542747  754935 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-054971"
	I1007 12:08:45.542761  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542757  754935 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-054971"
	I1007 12:08:45.542767  754935 addons.go:69] Setting default-storageclass=true in profile "addons-054971"
	I1007 12:08:45.542778  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542784  754935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-054971"
	I1007 12:08:45.542786  754935 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054971"
	I1007 12:08:45.542808  754935 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-054971"
	I1007 12:08:45.542839  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.542857  754935 addons.go:69] Setting volcano=true in profile "addons-054971"
	I1007 12:08:45.542756  754935 mustload.go:65] Loading cluster: addons-054971
	I1007 12:08:45.542874  754935 addons.go:234] Setting addon volcano=true in "addons-054971"
	I1007 12:08:45.542901  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543034  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:45.543125  754935 addons.go:69] Setting ingress-dns=true in profile "addons-054971"
	I1007 12:08:45.543146  754935 addons.go:234] Setting addon ingress-dns=true in "addons-054971"
	I1007 12:08:45.543182  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543244  754935 addons.go:69] Setting inspektor-gadget=true in profile "addons-054971"
	I1007 12:08:45.543257  754935 addons.go:234] Setting addon inspektor-gadget=true in "addons-054971"
	I1007 12:08:45.543266  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543271  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543279  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543281  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543302  754935 addons.go:69] Setting volumesnapshots=true in profile "addons-054971"
	I1007 12:08:45.543343  754935 addons.go:69] Setting ingress=true in profile "addons-054971"
	I1007 12:08:45.543371  754935 addons.go:69] Setting storage-provisioner=true in profile "addons-054971"
	I1007 12:08:45.543410  754935 addons.go:234] Setting addon storage-provisioner=true in "addons-054971"
	I1007 12:08:45.543446  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543494  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543305  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543538  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543538  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543539  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543576  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543599  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543627  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543654  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543374  754935 addons.go:234] Setting addon ingress=true in "addons-054971"
	I1007 12:08:45.543347  754935 addons.go:234] Setting addon volumesnapshots=true in "addons-054971"
	I1007 12:08:45.543757  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543773  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.543872  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543905  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544073  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.543308  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543318  754935 addons.go:69] Setting cloud-spanner=true in profile "addons-054971"
	I1007 12:08:45.544103  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544114  754935 addons.go:234] Setting addon cloud-spanner=true in "addons-054971"
	I1007 12:08:45.543317  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543266  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544169  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543329  754935 addons.go:69] Setting metrics-server=true in profile "addons-054971"
	I1007 12:08:45.544184  754935 addons.go:234] Setting addon metrics-server=true in "addons-054971"
	I1007 12:08:45.543345  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544219  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.543384  754935 config.go:182] Loaded profile config "addons-054971": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:45.543331  754935 addons.go:69] Setting registry=true in profile "addons-054971"
	I1007 12:08:45.544261  754935 addons.go:234] Setting addon registry=true in "addons-054971"
	I1007 12:08:45.544353  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.544380  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.544444  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.544638  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.544787  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.545180  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.545211  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.547032  754935 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:45.548525  754935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:45.564301  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I1007 12:08:45.564554  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I1007 12:08:45.564711  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.565207  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.565322  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I1007 12:08:45.565817  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.566054  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.566076  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.566305  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.566324  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.566584  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I1007 12:08:45.566754  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I1007 12:08:45.566921  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.567030  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.567171  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.567668  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.567708  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.567893  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.567915  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.567973  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.567993  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.567993  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I1007 12:08:45.568095  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.568224  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.568510  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.568457  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.568750  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.574551  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574662  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.574712  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574771  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.574852  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.574872  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.575124  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.575531  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I1007 12:08:45.574547  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.575696  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.576176  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.576218  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.580153  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.580536  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.580587  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.581222  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.581243  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.581761  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.581827  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.582385  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.582429  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.582638  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.582659  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.583039  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.583204  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.586262  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.586675  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.586725  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.599453  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I1007 12:08:45.599454  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I1007 12:08:45.600556  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.600673  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.601604  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.601624  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.601751  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.601764  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.602628  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.602681  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.603354  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.603405  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.604007  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.604058  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.607737  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I1007 12:08:45.608158  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.608675  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.608700  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.609049  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.609621  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.609676  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.611546  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I1007 12:08:45.612193  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.612323  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1007 12:08:45.612426  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I1007 12:08:45.613152  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.613263  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.613336  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I1007 12:08:45.613892  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.614117  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.614130  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.614273  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.614288  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.614356  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:08:45.614763  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.614905  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.615343  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.615363  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.615482  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.615536  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.615771  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.615832  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.616043  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.616062  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.616183  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.616195  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.616605  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.616637  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.617229  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.617434  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.618320  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.618590  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.618645  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.620600  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I1007 12:08:45.621190  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.621824  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.622105  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:45.622129  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:45.623250  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:45.623301  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:45.623310  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:45.623319  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:45.623326  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:45.624997  754935 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 12:08:45.626528  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.626858  754935 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:08:45.626879  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 12:08:45.626902  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.629046  754935 addons.go:234] Setting addon default-storageclass=true in "addons-054971"
	I1007 12:08:45.629105  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.629490  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.629533  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.631557  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.631642  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.631674  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.631697  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.631828  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.632047  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.632213  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.633477  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34525
	I1007 12:08:45.634071  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I1007 12:08:45.634788  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.635427  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.635478  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.635869  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.636479  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.636523  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.636744  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:45.636776  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:45.636799  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.636804  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 12:08:45.636930  754935 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 12:08:45.637576  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.637594  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.638105  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.638340  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.640225  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.642183  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:45.642841  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.642865  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.643548  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.643834  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.644454  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 12:08:45.644919  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1007 12:08:45.645642  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.646202  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.646225  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.646294  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.646464  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:45.647061  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.647192  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I1007 12:08:45.647278  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.647654  754935 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:08:45.647679  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 12:08:45.647700  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.647706  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.647906  754935 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 12:08:45.648905  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.648934  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.649080  754935 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 12:08:45.649097  754935 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 12:08:45.649118  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.649744  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.650248  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.650572  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.651096  754935 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 12:08:45.651435  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.652132  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.652173  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.652227  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 12:08:45.652244  754935 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 12:08:45.652275  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.652353  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.652515  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.652641  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.652768  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.654410  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.654692  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.654712  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.654864  754935 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-054971"
	I1007 12:08:45.654916  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:45.654985  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.655145  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.655294  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.655302  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.655347  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.655410  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.656131  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I1007 12:08:45.656173  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38459
	I1007 12:08:45.656698  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.657178  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.658969  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1007 12:08:45.658980  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.659095  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.659115  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.659135  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.659145  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.659157  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.659640  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.659646  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.659695  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.659857  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.660515  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.660684  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I1007 12:08:45.660686  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.661597  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.662445  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.662465  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.662539  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.662809  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.662957  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.663988  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.664010  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.664400  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.664417  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.664788  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.664841  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.664865  754935 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 12:08:45.665091  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.665290  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.665697  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.665732  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.666914  754935 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 12:08:45.666936  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 12:08:45.666974  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.666961  754935 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 12:08:45.668139  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.668865  754935 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:08:45.668884  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 12:08:45.669132  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.670327  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 12:08:45.670638  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.671202  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.671224  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.671265  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.671632  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.671900  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.672083  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I1007 12:08:45.672095  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.672709  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 12:08:45.673580  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.674336  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.674358  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.674751  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.674817  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I1007 12:08:45.674970  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.676100  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.676783  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 12:08:45.676987  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.678090  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.678169  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.678184  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.678624  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.678795  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.678972  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.679103  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.679240  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.679261  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.679693  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.679779  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 12:08:45.679837  754935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:08:45.680012  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.680715  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I1007 12:08:45.681266  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.681628  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.681694  754935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:08:45.681711  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:08:45.681730  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.681743  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.681760  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.682185  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.682680  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.682750  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I1007 12:08:45.683265  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I1007 12:08:45.683505  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 12:08:45.683507  754935 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 12:08:45.683885  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.684189  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.684668  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.684694  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.684876  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:08:45.684893  754935 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:08:45.684952  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.685177  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.685193  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.685258  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.685321  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I1007 12:08:45.685468  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.685611  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.685753  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.685764  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.686329  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 12:08:45.686348  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.686508  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.687340  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.686672  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.687180  754935 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 12:08:45.687975  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.688135  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.688394  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.688669  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.688785  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:45.688836  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:45.689005  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.689117  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.689157  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.689173  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.689272  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.689326  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.689455  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.689410  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.689536  754935 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 12:08:45.689563  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 12:08:45.689598  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.689639  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.690230  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.691679  754935 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 12:08:45.691700  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 12:08:45.691720  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.691787  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 12:08:45.692869  754935 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 12:08:45.692934  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 12:08:45.692949  754935 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 12:08:45.692971  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.694266  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 12:08:45.694294  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 12:08:45.694318  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.695427  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.696571  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.696605  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.696901  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.697184  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.697386  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.698141  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.698186  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.698783  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.698807  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.698908  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.699014  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.699069  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.699127  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.699753  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.700301  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.700314  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.700668  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.700790  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.700916  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.701020  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	W1007 12:08:45.701859  754935 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58134->192.168.39.62:22: read: connection reset by peer
	I1007 12:08:45.701887  754935 retry.go:31] will retry after 353.537159ms: ssh: handshake failed: read tcp 192.168.39.1:58134->192.168.39.62:22: read: connection reset by peer
	I1007 12:08:45.703218  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I1007 12:08:45.703730  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.704270  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.704298  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.704688  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.704898  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.706805  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.707157  754935 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:08:45.707179  754935 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:08:45.707201  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.710543  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.711036  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.711070  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.711246  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.711430  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.711593  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.711759  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1007 12:08:45.711762  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.712272  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:45.712778  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:45.712800  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:45.713128  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:45.713313  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:45.715110  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:45.717148  754935 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 12:08:45.718693  754935 out.go:177]   - Using image docker.io/busybox:stable
	I1007 12:08:45.720327  754935 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:08:45.720350  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 12:08:45.720376  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:45.723629  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.724138  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:45.724168  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:45.724281  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:45.724521  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:45.724648  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:45.724755  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:45.982189  754935 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:45.982242  754935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:08:46.008808  754935 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 12:08:46.008836  754935 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 12:08:46.041021  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 12:08:46.041056  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 12:08:46.106083  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:08:46.156011  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:08:46.156044  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 12:08:46.160477  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:08:46.183422  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:08:46.203972  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:08:46.206238  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 12:08:46.216447  754935 node_ready.go:35] waiting up to 6m0s for node "addons-054971" to be "Ready" ...
	I1007 12:08:46.220662  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:08:46.228496  754935 node_ready.go:49] node "addons-054971" has status "Ready":"True"
	I1007 12:08:46.228535  754935 node_ready.go:38] duration metric: took 12.032192ms for node "addons-054971" to be "Ready" ...
	I1007 12:08:46.228550  754935 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:46.229043  754935 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 12:08:46.229067  754935 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 12:08:46.259577  754935 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.267553  754935 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:08:46.267579  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 12:08:46.332317  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 12:08:46.332345  754935 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 12:08:46.359856  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:08:46.359885  754935 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:08:46.390671  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:08:46.447060  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 12:08:46.447093  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 12:08:46.469999  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:08:46.625565  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 12:08:46.625602  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 12:08:46.636197  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 12:08:46.636228  754935 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 12:08:46.706690  754935 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 12:08:46.706717  754935 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 12:08:46.794862  754935 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:08:46.794891  754935 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:08:46.801338  754935 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 12:08:46.801372  754935 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 12:08:46.966259  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 12:08:46.966286  754935 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 12:08:46.986946  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 12:08:46.986990  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 12:08:47.120617  754935 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 12:08:47.120645  754935 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 12:08:47.144043  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 12:08:47.144073  754935 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 12:08:47.181400  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:08:47.206188  754935 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:08:47.206217  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 12:08:47.284880  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 12:08:47.284907  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 12:08:47.503891  754935 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 12:08:47.503926  754935 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 12:08:47.549962  754935 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:47.549999  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 12:08:47.560887  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:08:47.726195  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 12:08:47.726225  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 12:08:47.877361  754935 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 12:08:47.877421  754935 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 12:08:47.885192  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:48.026002  754935 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 12:08:48.026070  754935 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 12:08:48.144376  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 12:08:48.144414  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 12:08:48.194374  754935 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 12:08:48.194405  754935 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 12:08:48.274233  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:48.485862  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 12:08:48.485978  754935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 12:08:48.515469  754935 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.533183499s)
	I1007 12:08:48.515512  754935 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 12:08:48.554460  754935 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 12:08:48.554494  754935 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 12:08:48.815502  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 12:08:48.815538  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 12:08:48.941914  754935 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:08:48.941951  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 12:08:49.029669  754935 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054971" context rescaled to 1 replicas
	I1007 12:08:49.316099  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:08:49.329283  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 12:08:49.329315  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 12:08:49.578162  754935 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:08:49.578196  754935 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 12:08:49.935675  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:08:50.291932  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:52.320613  754935 pod_ready.go:103] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:52.697432  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 12:08:52.697482  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:52.700919  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:52.701409  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:52.701443  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:52.701676  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:52.701949  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:52.702192  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:52.702387  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:52.774863  754935 pod_ready.go:93] pod "etcd-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:52.774889  754935 pod_ready.go:82] duration metric: took 6.515270309s for pod "etcd-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:52.774903  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:53.450019  754935 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 12:08:53.721169  754935 addons.go:234] Setting addon gcp-auth=true in "addons-054971"
	I1007 12:08:53.721243  754935 host.go:66] Checking if "addons-054971" exists ...
	I1007 12:08:53.721580  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:53.721638  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:53.738245  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I1007 12:08:53.738924  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:53.739520  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:53.739549  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:53.739937  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:53.740581  754935 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:53.740638  754935 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:53.757383  754935 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I1007 12:08:53.757915  754935 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:53.758429  754935 main.go:141] libmachine: Using API Version  1
	I1007 12:08:53.758453  754935 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:53.758859  754935 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:53.759066  754935 main.go:141] libmachine: (addons-054971) Calling .GetState
	I1007 12:08:53.760830  754935 main.go:141] libmachine: (addons-054971) Calling .DriverName
	I1007 12:08:53.761093  754935 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 12:08:53.761131  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHHostname
	I1007 12:08:53.763845  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:53.764290  754935 main.go:141] libmachine: (addons-054971) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:35:95", ip: ""} in network mk-addons-054971: {Iface:virbr1 ExpiryTime:2024-10-07 13:08:14 +0000 UTC Type:0 Mac:52:54:00:06:35:95 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-054971 Clientid:01:52:54:00:06:35:95}
	I1007 12:08:53.764325  754935 main.go:141] libmachine: (addons-054971) DBG | domain addons-054971 has defined IP address 192.168.39.62 and MAC address 52:54:00:06:35:95 in network mk-addons-054971
	I1007 12:08:53.764492  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHPort
	I1007 12:08:53.764667  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHKeyPath
	I1007 12:08:53.764823  754935 main.go:141] libmachine: (addons-054971) Calling .GetSSHUsername
	I1007 12:08:53.765009  754935 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/addons-054971/id_rsa Username:docker}
	I1007 12:08:54.850890  754935 pod_ready.go:103] pod "kube-apiserver-addons-054971" in "kube-system" namespace has status "Ready":"False"
	I1007 12:08:55.453313  754935 pod_ready.go:93] pod "kube-apiserver-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.453345  754935 pod_ready.go:82] duration metric: took 2.678432172s for pod "kube-apiserver-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.453361  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.496750  754935 pod_ready.go:93] pod "kube-controller-manager-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.496777  754935 pod_ready.go:82] duration metric: took 43.407725ms for pod "kube-controller-manager-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.496788  754935 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.529741  754935 pod_ready.go:93] pod "kube-scheduler-addons-054971" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:55.529769  754935 pod_ready.go:82] duration metric: took 32.973081ms for pod "kube-scheduler-addons-054971" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:55.529779  754935 pod_ready.go:39] duration metric: took 9.301214659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:55.529808  754935 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:55.529865  754935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:55.836162  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.730027198s)
	I1007 12:08:55.836230  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.675714214s)
	I1007 12:08:55.836275  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836290  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836294  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.652839471s)
	I1007 12:08:55.836318  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836334  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836371  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.63236799s)
	I1007 12:08:55.836432  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.615742381s)
	I1007 12:08:55.836456  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836472  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836492  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.445788034s)
	I1007 12:08:55.836456  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836667  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836855  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.655416126s)
	I1007 12:08:55.836236  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836886  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836913  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836923  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836930  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.276000758s)
	I1007 12:08:55.836409  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.630146814s)
	I1007 12:08:55.836948  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836961  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836963  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.836969  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836605  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.836626  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837004  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837012  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837018  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836642  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837052  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836741  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837082  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837089  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837094  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836755  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.836755  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.36668111s)
	I1007 12:08:55.837248  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837257  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.836764  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.837302  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.837309  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837315  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.837318  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.952087176s)
	I1007 12:08:55.836769  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	W1007 12:08:55.837351  754935 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 12:08:55.837394  754935 retry.go:31] will retry after 216.590122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
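The failure above is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established those CRDs when the custom resource arrives. minikube recovers by retrying (and, at 12:08:56 below, by re-applying with --force). A minimal manual sketch of the same recovery, assuming kubectl access to the cluster — the explicit `kubectl wait` step is an illustration, not something this log shows minikube running:

    # wait until the snapshot CRDs are established before applying custom resources
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # then re-apply the object that failed on the first pass
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
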
	I1007 12:08:55.837491  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.52134421s)
	I1007 12:08:55.837524  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.837535  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839150  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839255  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839277  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839294  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839310  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839427  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839450  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839504  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839525  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839632  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839661  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839687  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839701  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839707  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839738  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839769  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839790  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839807  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.839829  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839836  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.839847  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839696  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839738  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839809  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.839830  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840125  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840135  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.840143  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839773  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840192  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840493  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840515  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840540  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840546  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840638  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.840662  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.840670  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.840679  754935 addons.go:475] Verifying addon metrics-server=true in "addons-054971"
	I1007 12:08:55.841255  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.841284  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.841290  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.841300  754935 addons.go:475] Verifying addon registry=true in "addons-054971"
	I1007 12:08:55.842366  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842397  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842403  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842412  754935 addons.go:475] Verifying addon ingress=true in "addons-054971"
	I1007 12:08:55.842540  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842551  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842559  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.842565  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.842612  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842628  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842638  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842645  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842649  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842652  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.842655  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.842659  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.842916  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.842940  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.842947  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.839715  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843094  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.843113  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.839723  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843269  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843353  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843366  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843375  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843410  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843416  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.843739  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.843767  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.843774  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.845013  754935 out.go:177] * Verifying registry addon...
	I1007 12:08:55.845126  754935 out.go:177] * Verifying ingress addon...
	I1007 12:08:55.846770  754935 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054971 service yakd-dashboard -n yakd-dashboard
	
	I1007 12:08:55.847754  754935 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 12:08:55.847885  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 12:08:55.880361  754935 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 12:08:55.880388  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:55.880638  754935 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 12:08:55.880664  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:55.898882  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.898915  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.899233  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.899253  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:55.904127  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:55.904150  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:55.904452  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:55.904463  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:55.904479  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 12:08:55.904579  754935 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
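The 'default-storageclass' warning above is a typically benign optimistic-concurrency conflict: the addon annotates the `standard` StorageClass as the cluster default while another writer updates the same object, so the apiserver rejects the write against the stale resourceVersion. A hypothetical command roughly equivalent to what the addon attempts (not taken from this log):

    # mark the "standard" StorageClass as the default
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Re-reading the object and retrying against the latest resourceVersion is the standard remedy for this class of error.
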
	I1007 12:08:56.054489  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:08:56.354370  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:56.355745  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:56.756460  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.820728204s)
	I1007 12:08:56.756534  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:56.756538  754935 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.995412518s)
	I1007 12:08:56.756592  754935 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.226707172s)
	I1007 12:08:56.756623  754935 api_server.go:72] duration metric: took 11.214072275s to wait for apiserver process to appear ...
	I1007 12:08:56.756636  754935 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:56.756552  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:56.756664  754935 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I1007 12:08:56.756932  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:56.756948  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:56.756958  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:56.756964  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:56.757192  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:56.757205  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:56.757218  754935 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-054971"
	I1007 12:08:56.759144  754935 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 12:08:56.759144  754935 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:08:56.760816  754935 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 12:08:56.761441  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 12:08:56.762459  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 12:08:56.762485  754935 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 12:08:56.825069  754935 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I1007 12:08:56.826829  754935 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 12:08:56.826852  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:56.827978  754935 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:56.828001  754935 api_server.go:131] duration metric: took 71.356494ms to wait for apiserver health ...
	I1007 12:08:56.828013  754935 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:56.856006  754935 system_pods.go:59] 18 kube-system pods found
	I1007 12:08:56.856047  754935 system_pods.go:61] "coredns-7c65d6cfc9-4hjxz" [0c0e4892-3fa9-48d3-817a-849a323b94c1] Running
	I1007 12:08:56.856054  754935 system_pods.go:61] "coredns-7c65d6cfc9-crd5w" [a29dac23-0aea-4b3e-9a36-6a4631124b86] Running
	I1007 12:08:56.856064  754935 system_pods.go:61] "csi-hostpath-attacher-0" [8cf94124-02f6-4ca0-a0ed-a0451f57672f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:08:56.856072  754935 system_pods.go:61] "csi-hostpath-resizer-0" [85aef89b-cbdf-4f43-9f6b-c28b0ddb19c5] Pending
	I1007 12:08:56.856084  754935 system_pods.go:61] "csi-hostpathplugin-drczb" [dd5db9a2-ce24-463e-abd7-3d0e4ff66cb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:08:56.856091  754935 system_pods.go:61] "etcd-addons-054971" [faabeedb-9d17-4edf-8213-28f0cfc6c6e4] Running
	I1007 12:08:56.856099  754935 system_pods.go:61] "kube-apiserver-addons-054971" [1c00ede0-c30d-42bd-9575-9c06801f6d8a] Running
	I1007 12:08:56.856104  754935 system_pods.go:61] "kube-controller-manager-addons-054971" [44c0feb8-0a14-41c2-8b98-9f6ddb7d979f] Running
	I1007 12:08:56.856113  754935 system_pods.go:61] "kube-ingress-dns-minikube" [06754245-57c9-4323-bfce-bbbe4c9f27ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:08:56.856121  754935 system_pods.go:61] "kube-proxy-h7ccq" [80f6db92-9b23-4fb4-8fac-2a32f9da0874] Running
	I1007 12:08:56.856129  754935 system_pods.go:61] "kube-scheduler-addons-054971" [c3f1df88-f63c-47cd-a7df-594a861f6101] Running
	I1007 12:08:56.856139  754935 system_pods.go:61] "metrics-server-84c5f94fbc-hglsg" [bc1b53d0-93d0-4734-bfbe-9b7172391a6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:08:56.856154  754935 system_pods.go:61] "nvidia-device-plugin-daemonset-285h8" [cf2c616e-a6ca-4d0d-8e9b-c62ea66a2246] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:08:56.856165  754935 system_pods.go:61] "registry-66c9cd494c-77gfb" [256d2114-d21b-4d85-a9d9-a1f7e3e0a43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:08:56.856174  754935 system_pods.go:61] "registry-proxy-vjrwk" [bdc2b33d-c287-48c5-a525-9c0e3933f162] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:08:56.856187  754935 system_pods.go:61] "snapshot-controller-56fcc65765-2rx2g" [b676e4bc-336d-421a-b68e-c54457192fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.856196  754935 system_pods.go:61] "snapshot-controller-56fcc65765-7khhx" [b27bbafd-fba3-4526-b91f-ccfdcf2cf397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.856202  754935 system_pods.go:61] "storage-provisioner" [48ad7da8-0680-4936-ac5a-a4de591e0b9c] Running
	I1007 12:08:56.856211  754935 system_pods.go:74] duration metric: took 28.19103ms to wait for pod list to return data ...
	I1007 12:08:56.856224  754935 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:56.875257  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:56.877165  754935 default_sa.go:45] found service account: "default"
	I1007 12:08:56.877194  754935 default_sa.go:55] duration metric: took 20.961727ms for default service account to be created ...
	I1007 12:08:56.877206  754935 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:56.877465  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:56.889700  754935 system_pods.go:86] 18 kube-system pods found
	I1007 12:08:56.889739  754935 system_pods.go:89] "coredns-7c65d6cfc9-4hjxz" [0c0e4892-3fa9-48d3-817a-849a323b94c1] Running
	I1007 12:08:56.889745  754935 system_pods.go:89] "coredns-7c65d6cfc9-crd5w" [a29dac23-0aea-4b3e-9a36-6a4631124b86] Running
	I1007 12:08:56.889752  754935 system_pods.go:89] "csi-hostpath-attacher-0" [8cf94124-02f6-4ca0-a0ed-a0451f57672f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:08:56.889760  754935 system_pods.go:89] "csi-hostpath-resizer-0" [85aef89b-cbdf-4f43-9f6b-c28b0ddb19c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 12:08:56.889770  754935 system_pods.go:89] "csi-hostpathplugin-drczb" [dd5db9a2-ce24-463e-abd7-3d0e4ff66cb3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:08:56.889775  754935 system_pods.go:89] "etcd-addons-054971" [faabeedb-9d17-4edf-8213-28f0cfc6c6e4] Running
	I1007 12:08:56.889779  754935 system_pods.go:89] "kube-apiserver-addons-054971" [1c00ede0-c30d-42bd-9575-9c06801f6d8a] Running
	I1007 12:08:56.889783  754935 system_pods.go:89] "kube-controller-manager-addons-054971" [44c0feb8-0a14-41c2-8b98-9f6ddb7d979f] Running
	I1007 12:08:56.889788  754935 system_pods.go:89] "kube-ingress-dns-minikube" [06754245-57c9-4323-bfce-bbbe4c9f27ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:08:56.889792  754935 system_pods.go:89] "kube-proxy-h7ccq" [80f6db92-9b23-4fb4-8fac-2a32f9da0874] Running
	I1007 12:08:56.889795  754935 system_pods.go:89] "kube-scheduler-addons-054971" [c3f1df88-f63c-47cd-a7df-594a861f6101] Running
	I1007 12:08:56.889801  754935 system_pods.go:89] "metrics-server-84c5f94fbc-hglsg" [bc1b53d0-93d0-4734-bfbe-9b7172391a6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:08:56.889809  754935 system_pods.go:89] "nvidia-device-plugin-daemonset-285h8" [cf2c616e-a6ca-4d0d-8e9b-c62ea66a2246] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:08:56.889815  754935 system_pods.go:89] "registry-66c9cd494c-77gfb" [256d2114-d21b-4d85-a9d9-a1f7e3e0a43a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:08:56.889820  754935 system_pods.go:89] "registry-proxy-vjrwk" [bdc2b33d-c287-48c5-a525-9c0e3933f162] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:08:56.889825  754935 system_pods.go:89] "snapshot-controller-56fcc65765-2rx2g" [b676e4bc-336d-421a-b68e-c54457192fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.889831  754935 system_pods.go:89] "snapshot-controller-56fcc65765-7khhx" [b27bbafd-fba3-4526-b91f-ccfdcf2cf397] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:08:56.889835  754935 system_pods.go:89] "storage-provisioner" [48ad7da8-0680-4936-ac5a-a4de591e0b9c] Running
	I1007 12:08:56.889844  754935 system_pods.go:126] duration metric: took 12.630727ms to wait for k8s-apps to be running ...
	I1007 12:08:56.889853  754935 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:56.889908  754935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:56.964918  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 12:08:56.964955  754935 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 12:08:57.077542  754935 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 12:08:57.077570  754935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 12:08:57.135548  754935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 12:08:57.266343  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:57.353253  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:57.353330  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:57.769639  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:57.853163  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:57.853627  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.269889  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:58.352912  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:58.353071  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.624474  754935 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.734536155s)
	I1007 12:08:58.624589  754935 system_svc.go:56] duration metric: took 1.734729955s WaitForService to wait for kubelet
	I1007 12:08:58.624607  754935 kubeadm.go:582] duration metric: took 13.082055472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:08:58.624634  754935 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:58.624533  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.569981166s)
	I1007 12:08:58.624696  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.624715  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.625035  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.625055  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.625065  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.625072  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.625115  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.625284  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.625313  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.628190  754935 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:58.628220  754935 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:58.628233  754935 node_conditions.go:105] duration metric: took 3.590382ms to run NodePressure ...
	I1007 12:08:58.628248  754935 start.go:241] waiting for startup goroutines ...
	I1007 12:08:58.767925  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:58.885422  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:58.885765  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:58.915303  754935 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.779703404s)
	I1007 12:08:58.915365  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.915383  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.915719  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.915739  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.915748  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.915757  754935 main.go:141] libmachine: Making call to close driver server
	I1007 12:08:58.915774  754935 main.go:141] libmachine: (addons-054971) Calling .Close
	I1007 12:08:58.916032  754935 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:08:58.916057  754935 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:08:58.917239  754935 addons.go:475] Verifying addon gcp-auth=true in "addons-054971"
	I1007 12:08:58.917581  754935 main.go:141] libmachine: (addons-054971) DBG | Closing plugin on server side
	I1007 12:08:58.918962  754935 out.go:177] * Verifying gcp-auth addon...
	I1007 12:08:58.920707  754935 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 12:08:58.981034  754935 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 12:08:58.981083  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:08:59.270266  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:59.369205  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:59.369975  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:59.424588  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:08:59.767379  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:08:59.853162  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:08:59.853322  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:08:59.924874  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:00.266994  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:00.353125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:00.353641  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:00.425815  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:00.769125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:00.852627  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:00.852852  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:00.924629  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:01.265766  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:01.353507  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:01.353654  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:01.424442  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:01.768449  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:01.853613  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:01.854037  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:01.925089  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:02.266985  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:02.353094  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:02.353767  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:02.423737  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:02.766533  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:02.852776  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:02.853310  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:02.924827  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:03.266180  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:03.352684  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:03.353284  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:03.424810  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:03.767280  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:03.852859  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:03.853181  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:03.926551  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:04.266926  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:04.353042  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:04.353201  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:04.425034  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:04.766382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:04.852544  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:04.853056  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:04.924839  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:05.266262  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:05.352437  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:05.352823  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:05.424280  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:05.766788  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:05.852411  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:05.852959  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:05.925163  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:06.266760  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:06.354501  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:06.355969  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:06.425350  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:06.766752  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:06.853227  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:06.853550  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:06.925891  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:07.266350  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:07.353153  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:07.353767  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:07.425389  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:07.767002  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:07.852878  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:07.853371  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:07.925196  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:08.266684  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:08.352320  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:08.353035  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:08.424523  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:08.766190  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:08.852754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:08.853252  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:08.925127  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:09.267509  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:09.352084  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:09.352405  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:09.424308  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:09.767184  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:09.851383  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:09.851943  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:09.925759  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.266292  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:10.352038  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:10.353248  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:10.428569  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.945862  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:10.947533  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:10.947953  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:10.949652  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.266808  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.352282  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:11.352966  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:11.424565  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:11.767027  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:11.852780  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:11.853312  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:11.924743  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:12.267917  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:12.354818  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:12.354822  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:12.460273  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:12.768272  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:12.852646  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:12.852704  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:12.924571  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:13.266363  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:13.352291  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:13.352840  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:13.424497  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:13.767792  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:13.866097  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:13.866392  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:13.924917  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:14.267258  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:14.352471  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:14.352882  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:14.424903  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:14.767195  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:14.852080  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:14.852820  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:14.924872  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:15.266426  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:15.352399  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:15.352593  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:15.424436  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:15.766194  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:15.853744  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:15.854256  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:15.924382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:16.498802  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:16.500298  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:16.500940  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:16.501136  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:16.766823  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:16.853600  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:16.854128  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:16.924537  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:17.267373  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:17.352644  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:17.353055  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:17.424830  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:17.770753  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:17.866012  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:17.866328  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:17.925077  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:18.266797  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:18.352784  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:18.353249  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:18.424539  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:18.766248  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:18.852406  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:18.854819  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:18.925036  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:19.267099  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:19.352192  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:19.352713  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:19.424706  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:19.765953  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:19.854548  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:19.854968  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:19.924552  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:20.272921  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:20.352386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:20.352747  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:20.424593  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:20.766631  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:20.853208  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:20.854193  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:20.924869  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:21.267417  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:21.353199  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:21.353610  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:21.424192  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:21.767187  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:21.853769  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:21.853880  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:21.924527  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:22.270751  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:22.353719  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:22.354353  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:22.423982  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:22.770386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:22.852511  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:09:22.852766  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:22.924534  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:23.266759  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:23.353265  754935 kapi.go:107] duration metric: took 27.505373211s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 12:09:23.353569  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:23.424786  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:23.765973  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:23.853140  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:23.932052  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:24.267319  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:24.355619  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:24.425688  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:24.766437  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:24.852655  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:24.924453  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:25.267523  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:25.352315  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:25.425133  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:25.767624  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:25.867377  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:25.966696  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:26.276636  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:26.362199  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:26.425377  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:26.767150  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:26.853058  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:26.924640  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:27.265767  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:27.365240  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:27.424878  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:27.766565  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:27.866980  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:27.924387  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:28.268699  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:28.353790  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:28.424246  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:28.766950  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:28.852657  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:28.924976  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:29.266127  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:29.352431  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:29.424300  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:29.768661  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:29.853940  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.103123  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:30.286115  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:30.387535  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.424342  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:30.767717  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:30.851892  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:30.925548  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:31.266559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:31.352942  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:31.424545  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:31.767683  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:31.853372  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:31.924541  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:32.494365  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:32.494974  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:32.495155  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:32.767575  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:32.852251  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:32.925132  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:33.268381  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:33.353082  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:33.425160  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:33.766867  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:33.851862  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:33.924545  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:34.268091  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:34.351857  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:34.424371  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:34.765859  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:34.851795  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:34.925125  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:35.268649  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:35.352398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:35.424882  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:35.767355  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:35.852793  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:35.924543  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:36.266951  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:36.373259  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:36.466696  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:36.766379  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:36.852580  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:36.952382  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:37.267559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:37.352116  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:37.424362  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:37.766743  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:37.852633  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:37.924879  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:38.267249  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:38.367943  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:38.426503  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:38.766731  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:38.851596  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:38.932390  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:39.267337  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:39.352631  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:39.424636  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:39.789346  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:39.888521  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:39.924875  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:40.267361  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:40.353099  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:40.424750  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:40.767391  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:40.851753  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:40.925807  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:41.266655  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:41.352388  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:41.425381  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:41.767235  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:41.853131  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:41.925362  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:42.266631  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:42.367801  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:42.424173  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:42.814241  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:42.861318  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:42.931564  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:43.269098  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:43.353398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:43.425203  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:43.789726  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:43.879474  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:43.979494  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:44.267436  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:44.352937  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:44.423892  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:44.778559  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:44.852534  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:44.926925  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:45.266966  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:45.352710  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:45.424474  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:45.766470  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:45.852957  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:45.924810  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:46.266386  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:46.352077  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:46.424494  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:46.767310  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:46.853293  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:46.924948  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:47.270288  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:47.352557  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:47.424096  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:47.773226  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:47.853477  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:47.925278  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:48.266619  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:48.352509  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:48.424845  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:48.767847  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:48.853371  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:48.924754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:49.266835  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:49.352290  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:49.425393  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:49.771811  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:49.874544  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:49.967536  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:50.266594  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:50.353520  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:50.423929  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:50.767844  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:50.852663  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:50.927497  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:51.267700  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:51.351929  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:51.424934  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:51.766470  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:09:51.853255  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:51.925107  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:52.267149  754935 kapi.go:107] duration metric: took 55.505705179s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 12:09:52.367849  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:52.424233  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:52.853622  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:52.926552  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:53.353261  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:53.425551  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:53.859753  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:53.926785  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:54.352129  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:54.425419  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:54.853884  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:54.924826  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:55.352372  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:55.424246  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:55.851706  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:55.924167  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:56.353304  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:56.424983  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:56.853036  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:56.925262  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:57.356363  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:57.425307  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:57.852836  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:57.924421  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:58.352935  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:58.425098  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:58.857757  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:58.924406  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:59.353015  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:59.424665  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:09:59.853618  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:09:59.924109  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:00.353176  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:00.424700  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:00.852733  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:00.924286  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:01.353118  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:01.425554  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:01.852987  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:01.925156  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:02.355988  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:02.425561  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:02.853731  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:02.924606  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:03.358359  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:03.426677  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:03.853043  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:03.924421  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:04.353398  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:04.425286  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:04.853118  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:04.924547  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:05.353115  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:05.424764  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:05.852319  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:05.924857  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:06.352547  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:06.435026  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:06.860538  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:06.923877  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:07.352359  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:07.424971  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:07.852261  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:07.924949  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:08.352685  754935 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:10:08.430432  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:08.855282  754935 kapi.go:107] duration metric: took 1m13.007522676s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 12:10:08.953754  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:09.424811  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:09.924532  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:10.424655  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:10.937460  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:11.425194  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:11.925034  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:12.424800  754935 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:10:12.925416  754935 kapi.go:107] duration metric: took 1m14.004702482s to wait for kubernetes.io/minikube-addons=gcp-auth ...
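	The kapi.go:96 / kapi.go:107 lines above are minikube polling each addon's pods by label selector until none of them is still Pending. A minimal sketch of the same pattern with client-go, assuming a reachable kubeconfig; the selector string is taken from the log, while the namespace, timeout, and function names are illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until none of them is Pending,
	// roughly what kapi.go:96 is logging above.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending++
				}
			}
			if len(pods.Items) > 0 && pending == 0 {
				return nil // every matching pod has left Pending
			}
			fmt.Printf("waiting for pod %q, %d still pending\n", selector, pending)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		// Selector string from the log; "gcp-auth" as the namespace is an assumption.
		if err := waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
			panic(err)
		}
	}

	The kapi.go:107 "duration metric" lines are the points where a selector finally matched only non-Pending pods.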
	I1007 12:10:12.927746  754935 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-054971 cluster.
	I1007 12:10:12.929654  754935 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 12:10:12.931141  754935 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 12:10:12.932790  754935 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 12:10:12.934127  754935 addons.go:510] duration metric: took 1m27.391558313s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 12:10:12.934173  754935 start.go:246] waiting for cluster config update ...
	I1007 12:10:12.934199  754935 start.go:255] writing updated cluster config ...
	I1007 12:10:12.934493  754935 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:12.993626  754935 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:12.995881  754935 out.go:177] * Done! kubectl is now configured to use "addons-054971" cluster and "default" namespace by default
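	The gcp-auth messages a few lines up name the `gcp-auth-skip-secret` label as the opt-out for credential mounting. Continuing the client-go sketch above (same cs and ctx), a hypothetical pod that opts out would carry that label; only the label key comes from the log, while the value "true", the pod name, and the image are illustrative assumptions:

	// Hypothetical: a pod labelled so the gcp-auth webhook skips mounting credentials.
	// Only the label key is taken from the minikube output; everything else is illustrative.
	skip := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, skip, metav1.CreateOptions{}); err != nil {
		panic(err)
	}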
	
	
	==> CRI-O <==
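	The entries below are CRI-O's gRPC debug traces of the kubelet's periodic CRI calls (Version, ImageFsInfo, ListContainers). For reference, a hedged sketch of issuing the same ListContainers RPC directly against the CRI-O socket; the socket path and all names here are assumptions, not taken from the report:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// CRI-O's usual CRI socket; the path is an assumption, not read from this report.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Print roughly the fields visible in the ListContainersResponse dumps below.
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\n", c.Metadata.Name, c.State)
		}
	}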
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.896127530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303824896100190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90c51a8a-cda6-48a0-b5a4-8cc9c6473048 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.896714565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=109a7c43-d36f-4434-ad66-e77ffb840263 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.896770271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=109a7c43-d36f-4434-ad66-e77ffb840263 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.897197973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0650e611514cc0dc6fcb31410d3c31d47fb7189daa5161d679a83e02222c6a7a,PodSandboxId:9a718404b3ae40829d27c8d587c727c1b188195e3d403496dab1b4e54de081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728303669646227586,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25d5204e-dbd2-40d4-8608-1c35f98a64d1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b
30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=109a7c43-d36f-4434-ad66-e77ffb840263 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.939172190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a38baab0-cbed-4450-b20d-56e0e89ae54c name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.939309402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a38baab0-cbed-4450-b20d-56e0e89ae54c name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.940587595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9212aead-035f-4497-953f-030d2a1a4533 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.942189045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303824942158318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9212aead-035f-4497-953f-030d2a1a4533 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.942758345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2478c43-dfe2-46f8-bbbc-732e3b757775 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.942810384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2478c43-dfe2-46f8-bbbc-732e3b757775 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.943111324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0650e611514cc0dc6fcb31410d3c31d47fb7189daa5161d679a83e02222c6a7a,PodSandboxId:9a718404b3ae40829d27c8d587c727c1b188195e3d403496dab1b4e54de081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728303669646227586,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25d5204e-dbd2-40d4-8608-1c35f98a64d1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b
30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2478c43-dfe2-46f8-bbbc-732e3b757775 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.984584380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cce86ae-a7bd-4c7a-aee7-89757f3c6015 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.984659769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cce86ae-a7bd-4c7a-aee7-89757f3c6015 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.985836957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ceb5a5-6e12-4314-ad08-042e82e44202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.987076642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303824987047560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ceb5a5-6e12-4314-ad08-042e82e44202 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.988045671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ff226a2-b7d6-4b10-8ff3-c1afecda42f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.988108650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ff226a2-b7d6-4b10-8ff3-c1afecda42f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:44 addons-054971 crio[664]: time="2024-10-07 12:23:44.988382885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0650e611514cc0dc6fcb31410d3c31d47fb7189daa5161d679a83e02222c6a7a,PodSandboxId:9a718404b3ae40829d27c8d587c727c1b188195e3d403496dab1b4e54de081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728303669646227586,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25d5204e-dbd2-40d4-8608-1c35f98a64d1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b
30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ff226a2-b7d6-4b10-8ff3-c1afecda42f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.023732021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=912609d2-06f0-44a6-9e27-eeb88dea3806 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.023826951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=912609d2-06f0-44a6-9e27-eeb88dea3806 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.025074187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=750088c1-a7ec-42b7-b29a-70f96ab4569a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.026306907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303825026279241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=750088c1-a7ec-42b7-b29a-70f96ab4569a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.026893401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2e9d6ec-5699-44f7-8958-445aa5fa00a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.027015971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2e9d6ec-5699-44f7-8958-445aa5fa00a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:23:45 addons-054971 crio[664]: time="2024-10-07 12:23:45.027332133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0650e611514cc0dc6fcb31410d3c31d47fb7189daa5161d679a83e02222c6a7a,PodSandboxId:9a718404b3ae40829d27c8d587c727c1b188195e3d403496dab1b4e54de081c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728303669646227586,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25d5204e-dbd2-40d4-8608-1c35f98a64d1,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0329e96353d18b51984ea4afb229ccea97d81a41934a9b7484f7fad01fed0f56,PodSandboxId:7e46180aa3fba60945805f9510549196e3f597cc2049272f9c40ba06bb509d74,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728303664321412390,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-s89lv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5740457f-53c5-4243-9e12-c18af2dffe4b,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e8dc292329c378d3f229bcc507039d10f9613ff37a8250fee236bc911e9da6,PodSandboxId:47200705a7bbeb015472982847345d9dac411a2bf8719c254b953bc80b1fe383,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728303525671060635,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cca3b2af-dfec-4a2d-99be-b6c1e43f30f7,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9b4f398977608069bae3280e197c5fe0b18725d5134ffbec88eac66905a112,PodSandboxId:f5e2b972cf2e948b8a871e19ac3e8974b7ce717c0676bd25e01f250b92bc7ef1,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728302965765481715,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-hglsg,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: bc1b53d0-93d0-4734-bfbe-9b7172391a6d,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947,PodSandboxId:cde800b2a8f0d0d3eb352cc5aa876d23e1c82e19b4e53f0dff69e4fcc0d0c2e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302932778056723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ad7da8-0680-4936-ac5a-a4de591e0b9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595,PodSandboxId:82922f57009b8b99ecbf7332c72f8e57e9f5a584e64a6d330bcbfe72b72a4fe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302929983093884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-crd5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29dac23-0aea-4b3e-9a36-6a4631124b86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4,PodSandboxId:8117d0f36c05d767d033c5c07f159f80a463efa3d2f91506fc9586b18b29764f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302927537372572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7ccq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f6db92-9b23-4fb4-8fac-2a32f9da0874,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3,PodSandboxId:387176b55b1d948ca1cb2d0a814f81d132e2ec2c718370f3c848d83c672523dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302915936829530,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30db22b1e86da3ab0b0edc6ea43ef0f8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66,PodSandboxId:96caf494047079117631cda682773586b7fdaa3db547d5dd30f80510c9cbb893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b
30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302915948394877,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1bb8c38ad378b4c94d7421bbfc015b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f,PodSandboxId:a935c7d53a49425d0240e73d10a27a15e8e3b581ea5c6a3f9848f820f2daeb28,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302915943426980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a005392a92bea19217e8a14af82e23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee,PodSandboxId:308ea20d64722aa2d1ad36f935b68a22bf59879c853375096520987a4861fa32,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8302915787275223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-054971,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b37e72a0b142ff5d421a916f914bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2e9d6ec-5699-44f7-8958-445aa5fa00a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0650e611514cc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     2 minutes ago       Running             busybox                   0                   9a718404b3ae4       busybox
	0329e96353d18       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   7e46180aa3fba       hello-world-app-55bf9c44b4-s89lv
	24e8dc292329c       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   47200705a7bbe       nginx
	bf9b4f3989776       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   f5e2b972cf2e9       metrics-server-84c5f94fbc-hglsg
	6290bd3b1143e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   cde800b2a8f0d       storage-provisioner
	f8e39e62975d1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago      Running             coredns                   0                   82922f57009b8       coredns-7c65d6cfc9-crd5w
	ae25a8ac9ad8c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        14 minutes ago      Running             kube-proxy                0                   8117d0f36c05d       kube-proxy-h7ccq
	51d1c23abfaa0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   96caf49404707       kube-controller-manager-addons-054971
	2a4c6b918b992       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   a935c7d53a494       etcd-addons-054971
	2c09712050f97       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   387176b55b1d9       kube-scheduler-addons-054971
	e53eb6f322c1b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   308ea20d64722       kube-apiserver-addons-054971
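Note: the table above is the CRI's view of the node at the time the logs were collected. A roughly equivalent snapshot can usually be taken directly on the minikube node with crictl (assuming the default CRI-O socket and that crictl is present in the guest image); the exact columns vary by crictl version:

    # List all containers known to CRI-O, running or exited
    minikube ssh -p addons-054971 -- sudo crictl ps -a
    # Inspect one container from the table by ID prefix, e.g. the busybox container
    minikube ssh -p addons-054971 -- sudo crictl inspect 0650e611514cc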
	
	
	==> coredns [f8e39e62975d11c5d57722dec1cd52b041c4a7f3837a1effbadf1312b703d595] <==
	[INFO] 10.244.0.20:36474 - 6787 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000128509s
	[INFO] 10.244.0.20:36474 - 58044 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117953s
	[INFO] 10.244.0.20:36474 - 53912 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116572s
	[INFO] 10.244.0.20:36474 - 54154 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143862s
	[INFO] 10.244.0.20:44150 - 30383 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089518s
	[INFO] 10.244.0.20:44150 - 47435 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059748s
	[INFO] 10.244.0.20:44150 - 14121 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000149188s
	[INFO] 10.244.0.20:44150 - 40796 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085048s
	[INFO] 10.244.0.20:44150 - 30510 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075989s
	[INFO] 10.244.0.20:44150 - 25893 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085845s
	[INFO] 10.244.0.20:44150 - 50207 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062368s
	[INFO] 10.244.0.20:59441 - 39632 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000141335s
	[INFO] 10.244.0.20:53304 - 36026 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000162294s
	[INFO] 10.244.0.20:53304 - 62576 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059358s
	[INFO] 10.244.0.20:59441 - 14528 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070508s
	[INFO] 10.244.0.20:53304 - 51085 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039444s
	[INFO] 10.244.0.20:53304 - 2394 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003399s
	[INFO] 10.244.0.20:53304 - 65378 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037879s
	[INFO] 10.244.0.20:53304 - 3963 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033856s
	[INFO] 10.244.0.20:59441 - 20103 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061695s
	[INFO] 10.244.0.20:53304 - 29733 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038284s
	[INFO] 10.244.0.20:59441 - 51215 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054973s
	[INFO] 10.244.0.20:59441 - 38423 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000878s
	[INFO] 10.244.0.20:59441 - 8029 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007873s
	[INFO] 10.244.0.20:59441 - 6116 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070552s
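The NXDOMAIN lines above are normal search-path expansion rather than failures: the querying pod (10.244.0.20) looks up hello-world-app.default.svc.cluster.local, and its resolv.conf search suffixes are tried first, so CoreDNS sees the name with each suffix appended (all NXDOMAIN) before the fully-qualified query returns NOERROR. One way to see the search path that drives this, using the busybox pod from this cluster (the exact suffixes and DNS server IP depend on the pod's namespace and the cluster; shown only as an illustrative check):

    # Print the pod's DNS configuration; expect a cluster nameserver,
    # a search list ending in svc.cluster.local / cluster.local, and ndots:5
    kubectl --context addons-054971 exec busybox -- cat /etc/resolv.conf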
	
	
	==> describe nodes <==
	Name:               addons-054971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-054971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=addons-054971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054971
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054971
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:21:17 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:21:17 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:21:17 +0000   Mon, 07 Oct 2024 12:08:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:21:17 +0000   Mon, 07 Oct 2024 12:08:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    addons-054971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 66985a485f274232a41b8a9bf0356c4d
	  System UUID:                66985a48-5f27-4232-a41b-8a9bf0356c4d
	  Boot ID:                    7facc38f-b76d-4fb6-87a9-bdc599b7c391
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-s89lv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 coredns-7c65d6cfc9-crd5w                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-054971                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-054971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-054971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-h7ccq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-054971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-hglsg          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-054971 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-054971 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-054971 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-054971 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-054971 event: Registered Node addons-054971 in Controller
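As a quick consistency check on the "Allocated resources" block above: CPU requests of 850m against 2000m allocatable is 42.5% (shown as 42%), and memory requests of 370Mi against 3912780Ki (~3821Mi) allocatable is roughly 9.7% (shown as 9%). The same node summary can normally be regenerated against this profile with:

    # Reproduce the node description captured in this report
    kubectl --context addons-054971 describe node addons-054971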
	
	
	==> dmesg <==
	[  +0.086397] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.335700] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +1.336407] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.079195] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.013636] kauditd_printk_skb: 101 callbacks suppressed
	[Oct 7 12:09] kauditd_printk_skb: 80 callbacks suppressed
	[ +19.341360] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.279239] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.927615] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.717813] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.606051] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.392125] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 7 12:10] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.934403] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.973486] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 12:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.441161] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.040619] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.012103] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.259381] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.075935] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 7 12:19] kauditd_printk_skb: 56 callbacks suppressed
	[ +11.623596] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 12:21] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.388717] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [2a4c6b918b992218642a5c23ba37f0d311a2ee3742ca43c69121eacefce5629f] <==
	{"level":"warn","ts":"2024-10-07T12:09:32.479495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.838644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-84c5f94fbc-hglsg.17fc2a644bb654d6\" ","response":"range_response_count:1 size:816"}
	{"level":"info","ts":"2024-10-07T12:09:32.480557Z","caller":"traceutil/trace.go:171","msg":"trace[485250044] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-84c5f94fbc-hglsg.17fc2a644bb654d6; range_end:; response_count:1; response_revision:933; }","duration":"159.912705ms","start":"2024-10-07T12:09:32.320627Z","end":"2024-10-07T12:09:32.480540Z","steps":["trace[485250044] 'agreement among raft nodes before linearized reading'  (duration: 158.778048ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:09:39.769340Z","caller":"traceutil/trace.go:171","msg":"trace[524296785] linearizableReadLoop","detail":"{readStateIndex:1008; appliedIndex:1007; }","duration":"106.275938ms","start":"2024-10-07T12:09:39.663051Z","end":"2024-10-07T12:09:39.769327Z","steps":["trace[524296785] 'read index received'  (duration: 106.105892ms)","trace[524296785] 'applied index is now lower than readState.Index'  (duration: 169.699µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T12:09:39.769560Z","caller":"traceutil/trace.go:171","msg":"trace[1506762392] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"128.776273ms","start":"2024-10-07T12:09:39.640770Z","end":"2024-10-07T12:09:39.769546Z","steps":["trace[1506762392] 'process raft request'  (duration: 128.434609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:09:39.769719Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.674267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-b6l97\" ","response":"range_response_count:1 size:4428"}
	{"level":"info","ts":"2024-10-07T12:09:39.769743Z","caller":"traceutil/trace.go:171","msg":"trace[1883260793] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-b6l97; range_end:; response_count:1; response_revision:981; }","duration":"106.705367ms","start":"2024-10-07T12:09:39.663027Z","end":"2024-10-07T12:09:39.769732Z","steps":["trace[1883260793] 'agreement among raft nodes before linearized reading'  (duration: 106.624082ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:10:04.176115Z","caller":"traceutil/trace.go:171","msg":"trace[524432788] transaction","detail":"{read_only:false; response_revision:1086; number_of_response:1; }","duration":"229.060096ms","start":"2024-10-07T12:10:03.947034Z","end":"2024-10-07T12:10:04.176094Z","steps":["trace[524432788] 'process raft request'  (duration: 228.959339ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:10:06.842382Z","caller":"traceutil/trace.go:171","msg":"trace[1294435642] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"163.728637ms","start":"2024-10-07T12:10:06.678639Z","end":"2024-10-07T12:10:06.842368Z","steps":["trace[1294435642] 'process raft request'  (duration: 163.566042ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:18:34.317351Z","caller":"traceutil/trace.go:171","msg":"trace[1481167046] linearizableReadLoop","detail":"{readStateIndex:2119; appliedIndex:2118; }","duration":"161.987391ms","start":"2024-10-07T12:18:34.155323Z","end":"2024-10-07T12:18:34.317310Z","steps":["trace[1481167046] 'read index received'  (duration: 161.839663ms)","trace[1481167046] 'applied index is now lower than readState.Index'  (duration: 147.204µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T12:18:34.317584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.211101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:18:34.317584Z","caller":"traceutil/trace.go:171","msg":"trace[914235934] transaction","detail":"{read_only:false; response_revision:1974; number_of_response:1; }","duration":"350.849152ms","start":"2024-10-07T12:18:33.966715Z","end":"2024-10-07T12:18:34.317564Z","steps":["trace[914235934] 'process raft request'  (duration: 350.487752ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:18:34.317625Z","caller":"traceutil/trace.go:171","msg":"trace[881179841] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1974; }","duration":"162.313504ms","start":"2024-10-07T12:18:34.155300Z","end":"2024-10-07T12:18:34.317614Z","steps":["trace[881179841] 'agreement among raft nodes before linearized reading'  (duration: 162.191651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:18:34.317712Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:33.966696Z","time spent":"350.928413ms","remote":"127.0.0.1:46292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1969 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-07T12:18:37.162496Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1477}
	{"level":"info","ts":"2024-10-07T12:18:37.200381Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1477,"took":"37.323171ms","hash":1761905209,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3284992,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-10-07T12:18:37.200442Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1761905209,"revision":1477,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T12:19:00.195583Z","caller":"traceutil/trace.go:171","msg":"trace[666705281] linearizableReadLoop","detail":"{readStateIndex:2466; appliedIndex:2465; }","duration":"324.258174ms","start":"2024-10-07T12:18:59.871306Z","end":"2024-10-07T12:19:00.195565Z","steps":["trace[666705281] 'read index received'  (duration: 324.087547ms)","trace[666705281] 'applied index is now lower than readState.Index'  (duration: 170.004µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T12:19:00.196020Z","caller":"traceutil/trace.go:171","msg":"trace[1837867398] transaction","detail":"{read_only:false; response_revision:2310; number_of_response:1; }","duration":"348.798855ms","start":"2024-10-07T12:18:59.847206Z","end":"2024-10-07T12:19:00.196005Z","steps":["trace[1837867398] 'process raft request'  (duration: 348.244931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:19:00.196146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:59.847187Z","time spent":"348.888075ms","remote":"127.0.0.1:46298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4237,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" mod_revision:2309 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" value_size:4137 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211\" > >"}
	{"level":"warn","ts":"2024-10-07T12:19:00.196372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.058044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-resizer-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T12:19:00.196424Z","caller":"traceutil/trace.go:171","msg":"trace[161671899] range","detail":"{range_begin:/registry/clusterroles/external-resizer-runner; range_end:; response_count:0; response_revision:2310; }","duration":"325.113051ms","start":"2024-10-07T12:18:59.871301Z","end":"2024-10-07T12:19:00.196414Z","steps":["trace[161671899] 'agreement among raft nodes before linearized reading'  (duration: 325.03141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T12:19:00.196452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T12:18:59.871261Z","time spent":"325.184062ms","remote":"127.0.0.1:46478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/external-resizer-runner\" "}
	{"level":"info","ts":"2024-10-07T12:23:37.179305Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2003}
	{"level":"info","ts":"2024-10-07T12:23:37.203978Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2003,"took":"23.435753ms","hash":182158484,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4493312,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-10-07T12:23:37.204114Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":182158484,"revision":2003,"compact-revision":1477}
	
	
	==> kernel <==
	 12:23:45 up 15 min,  0 users,  load average: 0.06, 0.44, 0.42
	Linux addons-054971 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e53eb6f322c1bcee51fb3a1b82c4be991c8499e37602a4b2a9136cf7ea4ed9ee] <==
	E1007 12:19:07.218416       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:08.235223       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:09.243158       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:10.249851       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:11.258544       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:12.266593       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:13.273681       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:13.487744       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:14.281336       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:15.289191       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 12:19:16.080985       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.43.169"}
	E1007 12:19:16.298977       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:17.309963       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:18.322289       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:19.331556       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:20.344372       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:21.352867       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:22.361500       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:23.368345       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:24.376297       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:25.384774       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:26.393403       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:27.401849       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 12:19:28.411603       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 12:21:03.065415       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.144.130"}
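The long run of "invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found" errors means some client was still presenting a token for a service account the API server could no longer find, which typically happens when an addon (here the local-path provisioner) is torn down while one of its pods is still talking to the API. A hypothetical follow-up check, using the addon namespace that appears elsewhere in these logs:

    # Does the service account still exist, and is anything still running in the namespace?
    kubectl --context addons-054971 -n local-path-storage get serviceaccount local-path-provisioner-service-account
    kubectl --context addons-054971 -n local-path-storage get pods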
	
	
	==> kube-controller-manager [51d1c23abfaa0341cc45635bae703689c6154364607a926ecd4fac0772271a66] <==
	I1007 12:21:18.011400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-054971"
	W1007 12:21:50.170206       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:21:50.170596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:21:51.697599       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:21:51.697665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:21:52.368942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:21:52.369050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:22:02.670113       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:22:02.670200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:22:30.622656       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:22:30.622764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:22:40.422357       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:22:40.422497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:22:49.042162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:22:49.042288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:22:50.989994       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:22:50.990030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:23:16.648283       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:23:16.648323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:23:22.911264       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:23:22.911475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:23:32.216835       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:23:32.216964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 12:23:35.847854       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 12:23:35.848066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
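The repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors usually indicate the controller manager's metadata informers are still tracking an API group whose resources have been removed, for example custom resources deleted when an addon was disabled. A diagnostic one could run (not part of this test) to see which resource types the API server still serves:

    # List every resource type currently served, including CRDs
    kubectl --context addons-054971 api-resources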
	
	
	==> kube-proxy [ae25a8ac9ad8c3bcfc48f5a49adabcb2e59e65af1f875f8ef4c29bf8ede677b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:08:48.836002       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:08:48.848853       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	E1007 12:08:48.848989       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:08:48.932112       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:08:48.932173       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:08:48.932211       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:08:48.935461       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:08:48.935852       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:08:48.935881       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:08:48.939307       1 config.go:199] "Starting service config controller"
	I1007 12:08:48.939348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:08:48.939562       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:08:48.939593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:08:48.940551       1 config.go:328] "Starting node config controller"
	I1007 12:08:48.940586       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:08:49.042090       1 shared_informer.go:320] Caches are synced for node config
	I1007 12:08:49.042154       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:08:49.042187       1 shared_informer.go:320] Caches are synced for endpoint slice config
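The nftables errors at the top of this section are harmless in this setup: kube-proxy tries to clean up stale nftables rules at startup, the guest kernel rejects the operation ("Operation not supported"), and kube-proxy then falls back to the iptables proxier, as the subsequent lines confirm. If nftables support ever needed verifying, it could be probed on the node directly (a hypothetical check, assuming the nft CLI is present in the guest image):

    # Succeeds only if the kernel and userspace support nftables
    minikube ssh -p addons-054971 -- sudo nft list tables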
	
	
	==> kube-scheduler [2c09712050f97e076b324f8d548ec44872fd4ff933eee58abc3f86297ffd6ff3] <==
	W1007 12:08:39.554082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:08:39.554205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.591059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.591095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.681698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:08:39.681773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.689750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.689806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.752513       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:08:39.752623       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 12:08:39.792454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 12:08:39.792508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.824543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:08:39.824772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.905749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:08:39.905849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:39.979811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:08:39.979977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.015765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 12:08:40.015978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.062889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:08:40.062981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:08:40.095519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:08:40.095576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:08:42.603506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:22:41 addons-054971 kubelet[1213]: E1007 12:22:41.598770    1213 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:22:41 addons-054971 kubelet[1213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:22:41 addons-054971 kubelet[1213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:22:41 addons-054971 kubelet[1213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:22:41 addons-054971 kubelet[1213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:22:42 addons-054971 kubelet[1213]: E1007 12:22:42.035839    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303762035090885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:22:42 addons-054971 kubelet[1213]: E1007 12:22:42.035999    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303762035090885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:22:52 addons-054971 kubelet[1213]: E1007 12:22:52.039059    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303772038691950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:22:52 addons-054971 kubelet[1213]: E1007 12:22:52.039108    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303772038691950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:02 addons-054971 kubelet[1213]: E1007 12:23:02.041331    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303782040880437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:02 addons-054971 kubelet[1213]: E1007 12:23:02.041467    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303782040880437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:12 addons-054971 kubelet[1213]: E1007 12:23:12.043622    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303792043197245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:12 addons-054971 kubelet[1213]: E1007 12:23:12.043666    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303792043197245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:22 addons-054971 kubelet[1213]: E1007 12:23:22.046492    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303802046093942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:22 addons-054971 kubelet[1213]: E1007 12:23:22.046538    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303802046093942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:24 addons-054971 kubelet[1213]: I1007 12:23:24.583571    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:23:32 addons-054971 kubelet[1213]: E1007 12:23:32.049283    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303812048759662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:32 addons-054971 kubelet[1213]: E1007 12:23:32.049650    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303812048759662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:41 addons-054971 kubelet[1213]: E1007 12:23:41.599273    1213 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:23:41 addons-054971 kubelet[1213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:23:41 addons-054971 kubelet[1213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:23:41 addons-054971 kubelet[1213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:23:41 addons-054971 kubelet[1213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:23:42 addons-054971 kubelet[1213]: E1007 12:23:42.055169    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303822054500037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:42 addons-054971 kubelet[1213]: E1007 12:23:42.055203    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303822054500037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6290bd3b1143e9ce9e272592ee47f6e27811d677cecd27ed7b3b69cd9136b947] <==
	I1007 12:08:53.320716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:08:53.349407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:08:53.349488       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:08:53.441191       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:08:53.441391       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114!
	I1007 12:08:53.443370       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11aad06b-f705-4064-939b-c915d161912b", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114 became leader
	I1007 12:08:53.547214       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-054971_1768f3c0-2985-4ba7-9c01-071c079b3114!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-054971 -n addons-054971
helpers_test.go:261: (dbg) Run:  kubectl --context addons-054971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (321.70s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-054971
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-054971: exit status 82 (2m0.498593256s)

                                                
                                                
-- stdout --
	* Stopping node "addons-054971"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-054971" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-054971
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-054971: exit status 11 (21.519171145s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-054971" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-054971
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-054971: exit status 11 (6.143597114s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-054971" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-054971
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-054971: exit status 11 (6.144073828s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-054971" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 node stop m02 -v=7 --alsologtostderr
E1007 12:36:15.388590  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:37:37.310109  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-053933 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.500662172s)

                                                
                                                
-- stdout --
	* Stopping node "ha-053933-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:35:57.431102  770326 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:35:57.431257  770326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:35:57.431272  770326 out.go:358] Setting ErrFile to fd 2...
	I1007 12:35:57.431280  770326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:35:57.431485  770326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:35:57.431775  770326 mustload.go:65] Loading cluster: ha-053933
	I1007 12:35:57.432254  770326 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:35:57.432278  770326 stop.go:39] StopHost: ha-053933-m02
	I1007 12:35:57.432656  770326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:35:57.432720  770326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:35:57.449319  770326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I1007 12:35:57.449862  770326 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:35:57.450432  770326 main.go:141] libmachine: Using API Version  1
	I1007 12:35:57.450458  770326 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:35:57.450833  770326 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:35:57.453318  770326 out.go:177] * Stopping node "ha-053933-m02"  ...
	I1007 12:35:57.454760  770326 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:35:57.454801  770326 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:35:57.455151  770326 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:35:57.455192  770326 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:35:57.458499  770326 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:35:57.458941  770326 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:35:57.458978  770326 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:35:57.459176  770326 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:35:57.459369  770326 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:35:57.459521  770326 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:35:57.459670  770326 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:35:57.551123  770326 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:35:57.606202  770326 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:35:57.662738  770326 main.go:141] libmachine: Stopping "ha-053933-m02"...
	I1007 12:35:57.662769  770326 main.go:141] libmachine: (ha-053933-m02) Calling .GetState
	I1007 12:35:57.664307  770326 main.go:141] libmachine: (ha-053933-m02) Calling .Stop
	I1007 12:35:57.668336  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 0/120
	I1007 12:35:58.669794  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 1/120
	I1007 12:35:59.671078  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 2/120
	I1007 12:36:00.672666  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 3/120
	I1007 12:36:01.673971  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 4/120
	I1007 12:36:02.676046  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 5/120
	I1007 12:36:03.677697  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 6/120
	I1007 12:36:04.679549  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 7/120
	I1007 12:36:05.681098  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 8/120
	I1007 12:36:06.682734  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 9/120
	I1007 12:36:07.684748  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 10/120
	I1007 12:36:08.686432  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 11/120
	I1007 12:36:09.688298  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 12/120
	I1007 12:36:10.690924  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 13/120
	I1007 12:36:11.692680  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 14/120
	I1007 12:36:12.695419  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 15/120
	I1007 12:36:13.697862  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 16/120
	I1007 12:36:14.699445  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 17/120
	I1007 12:36:15.700714  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 18/120
	I1007 12:36:16.701988  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 19/120
	I1007 12:36:17.703349  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 20/120
	I1007 12:36:18.704811  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 21/120
	I1007 12:36:19.707046  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 22/120
	I1007 12:36:20.708646  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 23/120
	I1007 12:36:21.710126  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 24/120
	I1007 12:36:22.711712  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 25/120
	I1007 12:36:23.713539  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 26/120
	I1007 12:36:24.714918  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 27/120
	I1007 12:36:25.716451  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 28/120
	I1007 12:36:26.718440  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 29/120
	I1007 12:36:27.720599  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 30/120
	I1007 12:36:28.721946  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 31/120
	I1007 12:36:29.723529  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 32/120
	I1007 12:36:30.724925  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 33/120
	I1007 12:36:31.726331  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 34/120
	I1007 12:36:32.728416  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 35/120
	I1007 12:36:33.729994  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 36/120
	I1007 12:36:34.732118  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 37/120
	I1007 12:36:35.733481  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 38/120
	I1007 12:36:36.735049  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 39/120
	I1007 12:36:37.737328  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 40/120
	I1007 12:36:38.738886  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 41/120
	I1007 12:36:39.740630  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 42/120
	I1007 12:36:40.742484  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 43/120
	I1007 12:36:41.744140  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 44/120
	I1007 12:36:42.746328  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 45/120
	I1007 12:36:43.747844  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 46/120
	I1007 12:36:44.749358  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 47/120
	I1007 12:36:45.751309  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 48/120
	I1007 12:36:46.752642  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 49/120
	I1007 12:36:47.755497  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 50/120
	I1007 12:36:48.757198  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 51/120
	I1007 12:36:49.758523  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 52/120
	I1007 12:36:50.760075  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 53/120
	I1007 12:36:51.761711  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 54/120
	I1007 12:36:52.764094  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 55/120
	I1007 12:36:53.765515  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 56/120
	I1007 12:36:54.767485  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 57/120
	I1007 12:36:55.769041  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 58/120
	I1007 12:36:56.770308  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 59/120
	I1007 12:36:57.772603  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 60/120
	I1007 12:36:58.774046  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 61/120
	I1007 12:36:59.775367  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 62/120
	I1007 12:37:00.776924  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 63/120
	I1007 12:37:01.778267  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 64/120
	I1007 12:37:02.780498  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 65/120
	I1007 12:37:03.782640  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 66/120
	I1007 12:37:04.784148  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 67/120
	I1007 12:37:05.785397  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 68/120
	I1007 12:37:06.786879  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 69/120
	I1007 12:37:07.788183  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 70/120
	I1007 12:37:08.790207  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 71/120
	I1007 12:37:09.792604  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 72/120
	I1007 12:37:10.793919  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 73/120
	I1007 12:37:11.795430  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 74/120
	I1007 12:37:12.797795  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 75/120
	I1007 12:37:13.799535  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 76/120
	I1007 12:37:14.800981  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 77/120
	I1007 12:37:15.802481  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 78/120
	I1007 12:37:16.804508  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 79/120
	I1007 12:37:17.806839  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 80/120
	I1007 12:37:18.808513  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 81/120
	I1007 12:37:19.809668  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 82/120
	I1007 12:37:20.811266  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 83/120
	I1007 12:37:21.812610  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 84/120
	I1007 12:37:22.815199  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 85/120
	I1007 12:37:23.816783  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 86/120
	I1007 12:37:24.818711  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 87/120
	I1007 12:37:25.820131  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 88/120
	I1007 12:37:26.821793  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 89/120
	I1007 12:37:27.823641  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 90/120
	I1007 12:37:28.825086  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 91/120
	I1007 12:37:29.826509  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 92/120
	I1007 12:37:30.828661  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 93/120
	I1007 12:37:31.830123  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 94/120
	I1007 12:37:32.832435  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 95/120
	I1007 12:37:33.834075  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 96/120
	I1007 12:37:34.835745  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 97/120
	I1007 12:37:35.837416  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 98/120
	I1007 12:37:36.839261  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 99/120
	I1007 12:37:37.841574  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 100/120
	I1007 12:37:38.842874  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 101/120
	I1007 12:37:39.844612  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 102/120
	I1007 12:37:40.846113  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 103/120
	I1007 12:37:41.847444  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 104/120
	I1007 12:37:42.849661  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 105/120
	I1007 12:37:43.851153  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 106/120
	I1007 12:37:44.852618  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 107/120
	I1007 12:37:45.854093  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 108/120
	I1007 12:37:46.855539  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 109/120
	I1007 12:37:47.857785  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 110/120
	I1007 12:37:48.859324  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 111/120
	I1007 12:37:49.860732  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 112/120
	I1007 12:37:50.862784  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 113/120
	I1007 12:37:51.864956  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 114/120
	I1007 12:37:52.867609  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 115/120
	I1007 12:37:53.868921  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 116/120
	I1007 12:37:54.870278  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 117/120
	I1007 12:37:55.872863  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 118/120
	I1007 12:37:56.874227  770326 main.go:141] libmachine: (ha-053933-m02) Waiting for machine to stop 119/120
	I1007 12:37:57.875662  770326 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:37:57.875864  770326 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-053933 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr: (18.81428078s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (1.553530864s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m03_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:31:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:31:18.148064  766330 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:18.148178  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148182  766330 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:18.148187  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148357  766330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:18.148967  766330 out.go:352] Setting JSON to false
	I1007 12:31:18.149958  766330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8027,"bootTime":1728296251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:18.150102  766330 start.go:139] virtualization: kvm guest
	I1007 12:31:18.152485  766330 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:18.154248  766330 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:18.154296  766330 notify.go:220] Checking for updates...
	I1007 12:31:18.157253  766330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:18.159046  766330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:18.160370  766330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.161706  766330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:18.163112  766330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:18.164841  766330 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:18.202110  766330 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:31:18.203531  766330 start.go:297] selected driver: kvm2
	I1007 12:31:18.203547  766330 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:31:18.203562  766330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:18.204518  766330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.204603  766330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:31:18.220705  766330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:31:18.220766  766330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:31:18.221021  766330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:31:18.221059  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:18.221106  766330 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:31:18.221116  766330 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:31:18.221169  766330 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:18.221279  766330 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.223403  766330 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:31:18.224688  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:18.224749  766330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:31:18.224761  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:31:18.224844  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:31:18.224857  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:31:18.225188  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:18.225228  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json: {Name:mk42211822a040c72189a8c96b9ffb20916f09bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:18.225385  766330 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:31:18.225414  766330 start.go:364] duration metric: took 16.211µs to acquireMachinesLock for "ha-053933"
	I1007 12:31:18.225431  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:31:18.225482  766330 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:31:18.227000  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:31:18.227165  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:18.227217  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:18.241971  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1007 12:31:18.242468  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:18.243060  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:31:18.243086  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:18.243440  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:18.243664  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:18.243802  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:18.243958  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:31:18.243992  766330 client.go:168] LocalClient.Create starting
	I1007 12:31:18.244024  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:31:18.244058  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244073  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244137  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:31:18.244157  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244173  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244190  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:31:18.244198  766330 main.go:141] libmachine: (ha-053933) Calling .PreCreateCheck
	I1007 12:31:18.244526  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:18.244944  766330 main.go:141] libmachine: Creating machine...
	I1007 12:31:18.244959  766330 main.go:141] libmachine: (ha-053933) Calling .Create
	I1007 12:31:18.245125  766330 main.go:141] libmachine: (ha-053933) Creating KVM machine...
	I1007 12:31:18.246330  766330 main.go:141] libmachine: (ha-053933) DBG | found existing default KVM network
	I1007 12:31:18.247162  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.246970  766353 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1007 12:31:18.247250  766330 main.go:141] libmachine: (ha-053933) DBG | created network xml: 
	I1007 12:31:18.247277  766330 main.go:141] libmachine: (ha-053933) DBG | <network>
	I1007 12:31:18.247291  766330 main.go:141] libmachine: (ha-053933) DBG |   <name>mk-ha-053933</name>
	I1007 12:31:18.247307  766330 main.go:141] libmachine: (ha-053933) DBG |   <dns enable='no'/>
	I1007 12:31:18.247318  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247331  766330 main.go:141] libmachine: (ha-053933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:31:18.247341  766330 main.go:141] libmachine: (ha-053933) DBG |     <dhcp>
	I1007 12:31:18.247353  766330 main.go:141] libmachine: (ha-053933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:31:18.247366  766330 main.go:141] libmachine: (ha-053933) DBG |     </dhcp>
	I1007 12:31:18.247382  766330 main.go:141] libmachine: (ha-053933) DBG |   </ip>
	I1007 12:31:18.247394  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247403  766330 main.go:141] libmachine: (ha-053933) DBG | </network>
	I1007 12:31:18.247414  766330 main.go:141] libmachine: (ha-053933) DBG | 
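The XML in the DBG lines above is the private libvirt network definition minikube generates for the cluster: an isolated network named mk-ha-053933 with guest DNS disabled and a DHCP range from 192.168.39.2 through 192.168.39.253 behind the 192.168.39.1 gateway. As a rough sketch only, not minikube's actual code, rendering such a definition from a handful of parameters in Go could look like this:

	// Hypothetical sketch: render a libvirt network definition similar to the
	// one logged above. The struct fields and template are illustrative only.
	package main

	import (
		"os"
		"text/template"
	)

	type netParams struct {
		Name      string // e.g. "mk-ha-053933"
		Gateway   string // e.g. "192.168.39.1"
		Netmask   string
		DHCPStart string
		DHCPEnd   string
	}

	const netXML = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	func main() {
		p := netParams{
			Name:      "mk-ha-053933",
			Gateway:   "192.168.39.1",
			Netmask:   "255.255.255.0",
			DHCPStart: "192.168.39.2",
			DHCPEnd:   "192.168.39.253",
		}
		tmpl := template.Must(template.New("net").Parse(netXML))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

The log then shows the driver asking libvirt to create exactly this kind of network before the VM is defined.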
	I1007 12:31:18.252550  766330 main.go:141] libmachine: (ha-053933) DBG | trying to create private KVM network mk-ha-053933 192.168.39.0/24...
	I1007 12:31:18.323012  766330 main.go:141] libmachine: (ha-053933) DBG | private KVM network mk-ha-053933 192.168.39.0/24 created
	I1007 12:31:18.323051  766330 main.go:141] libmachine: (ha-053933) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.323065  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.322988  766353 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.323078  766330 main.go:141] libmachine: (ha-053933) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:31:18.323220  766330 main.go:141] libmachine: (ha-053933) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:31:18.600250  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.600066  766353 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa...
	I1007 12:31:18.865018  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864813  766353 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk...
	I1007 12:31:18.865057  766330 main.go:141] libmachine: (ha-053933) DBG | Writing magic tar header
	I1007 12:31:18.865071  766330 main.go:141] libmachine: (ha-053933) DBG | Writing SSH key tar header
	I1007 12:31:18.865083  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864941  766353 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.865103  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933
	I1007 12:31:18.865116  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 (perms=drwx------)
	I1007 12:31:18.865126  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:31:18.865135  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.865141  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:31:18.865149  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:31:18.865159  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:31:18.865166  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:31:18.865180  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home
	I1007 12:31:18.865192  766330 main.go:141] libmachine: (ha-053933) DBG | Skipping /home - not owner
	I1007 12:31:18.865206  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:31:18.865221  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:31:18.865229  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:31:18.865238  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:31:18.865245  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:18.866439  766330 main.go:141] libmachine: (ha-053933) define libvirt domain using xml: 
	I1007 12:31:18.866466  766330 main.go:141] libmachine: (ha-053933) <domain type='kvm'>
	I1007 12:31:18.866476  766330 main.go:141] libmachine: (ha-053933)   <name>ha-053933</name>
	I1007 12:31:18.866483  766330 main.go:141] libmachine: (ha-053933)   <memory unit='MiB'>2200</memory>
	I1007 12:31:18.866492  766330 main.go:141] libmachine: (ha-053933)   <vcpu>2</vcpu>
	I1007 12:31:18.866503  766330 main.go:141] libmachine: (ha-053933)   <features>
	I1007 12:31:18.866510  766330 main.go:141] libmachine: (ha-053933)     <acpi/>
	I1007 12:31:18.866520  766330 main.go:141] libmachine: (ha-053933)     <apic/>
	I1007 12:31:18.866530  766330 main.go:141] libmachine: (ha-053933)     <pae/>
	I1007 12:31:18.866546  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866569  766330 main.go:141] libmachine: (ha-053933)   </features>
	I1007 12:31:18.866589  766330 main.go:141] libmachine: (ha-053933)   <cpu mode='host-passthrough'>
	I1007 12:31:18.866598  766330 main.go:141] libmachine: (ha-053933)   
	I1007 12:31:18.866607  766330 main.go:141] libmachine: (ha-053933)   </cpu>
	I1007 12:31:18.866617  766330 main.go:141] libmachine: (ha-053933)   <os>
	I1007 12:31:18.866624  766330 main.go:141] libmachine: (ha-053933)     <type>hvm</type>
	I1007 12:31:18.866630  766330 main.go:141] libmachine: (ha-053933)     <boot dev='cdrom'/>
	I1007 12:31:18.866636  766330 main.go:141] libmachine: (ha-053933)     <boot dev='hd'/>
	I1007 12:31:18.866641  766330 main.go:141] libmachine: (ha-053933)     <bootmenu enable='no'/>
	I1007 12:31:18.866647  766330 main.go:141] libmachine: (ha-053933)   </os>
	I1007 12:31:18.866652  766330 main.go:141] libmachine: (ha-053933)   <devices>
	I1007 12:31:18.866659  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='cdrom'>
	I1007 12:31:18.866666  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/boot2docker.iso'/>
	I1007 12:31:18.866673  766330 main.go:141] libmachine: (ha-053933)       <target dev='hdc' bus='scsi'/>
	I1007 12:31:18.866678  766330 main.go:141] libmachine: (ha-053933)       <readonly/>
	I1007 12:31:18.866683  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866691  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='disk'>
	I1007 12:31:18.866702  766330 main.go:141] libmachine: (ha-053933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:31:18.866711  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk'/>
	I1007 12:31:18.866722  766330 main.go:141] libmachine: (ha-053933)       <target dev='hda' bus='virtio'/>
	I1007 12:31:18.866731  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866737  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866745  766330 main.go:141] libmachine: (ha-053933)       <source network='mk-ha-053933'/>
	I1007 12:31:18.866749  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866755  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866759  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866768  766330 main.go:141] libmachine: (ha-053933)       <source network='default'/>
	I1007 12:31:18.866775  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866780  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866786  766330 main.go:141] libmachine: (ha-053933)     <serial type='pty'>
	I1007 12:31:18.866791  766330 main.go:141] libmachine: (ha-053933)       <target port='0'/>
	I1007 12:31:18.866798  766330 main.go:141] libmachine: (ha-053933)     </serial>
	I1007 12:31:18.866802  766330 main.go:141] libmachine: (ha-053933)     <console type='pty'>
	I1007 12:31:18.866810  766330 main.go:141] libmachine: (ha-053933)       <target type='serial' port='0'/>
	I1007 12:31:18.866821  766330 main.go:141] libmachine: (ha-053933)     </console>
	I1007 12:31:18.866827  766330 main.go:141] libmachine: (ha-053933)     <rng model='virtio'>
	I1007 12:31:18.866834  766330 main.go:141] libmachine: (ha-053933)       <backend model='random'>/dev/random</backend>
	I1007 12:31:18.866840  766330 main.go:141] libmachine: (ha-053933)     </rng>
	I1007 12:31:18.866844  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866850  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866855  766330 main.go:141] libmachine: (ha-053933)   </devices>
	I1007 12:31:18.866860  766330 main.go:141] libmachine: (ha-053933) </domain>
	I1007 12:31:18.866868  766330 main.go:141] libmachine: (ha-053933) 
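The <domain> definition above (boot from the boot2docker ISO on a SCSI cdrom, a raw virtio disk, two virtio NICs on the mk-ha-053933 and default networks, a serial console, and a virtio RNG) is what gets handed to libvirt to create the VM. Purely as an illustration, and not the kvm2 driver's actual code path, defining and booting such a domain through the libvirt Go bindings (assuming libvirt.org/go/libvirt, which needs cgo and the libvirt client library) might look like this:

	// Hypothetical sketch using the libvirt Go bindings; not minikube's own code.
	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Domain XML shaped like the one logged above; path is illustrative.
		xml, err := os.ReadFile("ha-053933.xml")
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot the defined domain
			panic(err)
		}
		fmt.Println("domain defined and started")
	}

After this point the log switches to waiting for the new domain to obtain an IP address on the private network.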
	I1007 12:31:18.871598  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:91:b8:36 in network default
	I1007 12:31:18.872268  766330 main.go:141] libmachine: (ha-053933) Ensuring networks are active...
	I1007 12:31:18.872288  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:18.873069  766330 main.go:141] libmachine: (ha-053933) Ensuring network default is active
	I1007 12:31:18.873363  766330 main.go:141] libmachine: (ha-053933) Ensuring network mk-ha-053933 is active
	I1007 12:31:18.873853  766330 main.go:141] libmachine: (ha-053933) Getting domain xml...
	I1007 12:31:18.874562  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:19.211616  766330 main.go:141] libmachine: (ha-053933) Waiting to get IP...
	I1007 12:31:19.212423  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.212778  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.212812  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.212764  766353 retry.go:31] will retry after 226.747121ms: waiting for machine to come up
	I1007 12:31:19.441331  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.441786  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.441837  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.441730  766353 retry.go:31] will retry after 274.527206ms: waiting for machine to come up
	I1007 12:31:19.718508  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.719027  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.719064  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.718969  766353 retry.go:31] will retry after 356.880394ms: waiting for machine to come up
	I1007 12:31:20.077626  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.078112  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.078145  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.078091  766353 retry.go:31] will retry after 415.686035ms: waiting for machine to come up
	I1007 12:31:20.495868  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.496297  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.496328  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.496232  766353 retry.go:31] will retry after 565.036299ms: waiting for machine to come up
	I1007 12:31:21.062533  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.063181  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.063212  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.063112  766353 retry.go:31] will retry after 934.304139ms: waiting for machine to come up
	I1007 12:31:21.999277  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.999729  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.999763  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.999684  766353 retry.go:31] will retry after 862.178533ms: waiting for machine to come up
	I1007 12:31:22.863123  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:22.863626  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:22.863658  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:22.863574  766353 retry.go:31] will retry after 1.201609733s: waiting for machine to come up
	I1007 12:31:24.066671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:24.067072  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:24.067104  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:24.067015  766353 retry.go:31] will retry after 1.419758916s: waiting for machine to come up
	I1007 12:31:25.488770  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:25.489216  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:25.489240  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:25.489182  766353 retry.go:31] will retry after 2.248635623s: waiting for machine to come up
	I1007 12:31:27.740776  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:27.741277  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:27.741301  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:27.741240  766353 retry.go:31] will retry after 1.919055927s: waiting for machine to come up
	I1007 12:31:29.662363  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:29.662857  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:29.663141  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:29.662878  766353 retry.go:31] will retry after 3.284332028s: waiting for machine to come up
	I1007 12:31:32.951614  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:32.952006  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:32.952134  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:32.951952  766353 retry.go:31] will retry after 3.413281695s: waiting for machine to come up
	I1007 12:31:36.369285  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:36.369674  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:36.369704  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:36.369624  766353 retry.go:31] will retry after 5.240968669s: waiting for machine to come up
	I1007 12:31:41.615028  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615539  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has current primary IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615555  766330 main.go:141] libmachine: (ha-053933) Found IP for machine: 192.168.39.152
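The repeated "will retry after ..." lines above are the driver polling the network's DHCP leases until the freshly booted domain shows up with an address, sleeping a growing, jittered interval between attempts. A stdlib-only sketch of that style of backoff loop follows; the intervals, growth factor, and deadline are illustrative and not the values minikube's retry helper uses:

	// Hypothetical sketch of a jittered, growing backoff poll; not minikube's retry.go.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling fn until it succeeds or the deadline passes.
	func retry(fn func() error, initial time.Duration, deadline time.Time) error {
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up: %w", err)
			}
			// Add up to 50% jitter, then grow the base delay for the next round.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		attempts := 0
		err := retry(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 200*time.Millisecond, time.Now().Add(30*time.Second))
		fmt.Println("result:", err, "after", attempts, "attempts")
	}

Once the lease appears, the log above records the discovered address (192.168.39.152) and moves on to reserving it and waiting for SSH.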
	I1007 12:31:41.615563  766330 main.go:141] libmachine: (ha-053933) Reserving static IP address...
	I1007 12:31:41.615914  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "ha-053933", mac: "52:54:00:7e:91:1b", ip: "192.168.39.152"} in network mk-ha-053933
	I1007 12:31:41.698423  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:41.698453  766330 main.go:141] libmachine: (ha-053933) Reserved static IP address: 192.168.39.152
	I1007 12:31:41.698466  766330 main.go:141] libmachine: (ha-053933) Waiting for SSH to be available...
	I1007 12:31:41.701233  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.701575  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933
	I1007 12:31:41.701604  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:7e:91:1b
	I1007 12:31:41.701733  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:41.701762  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:41.701811  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:41.701844  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:41.701865  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:41.705812  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:31:41.705841  766330 main.go:141] libmachine: (ha-053933) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:31:41.705848  766330 main.go:141] libmachine: (ha-053933) DBG | command : exit 0
	I1007 12:31:41.705853  766330 main.go:141] libmachine: (ha-053933) DBG | err     : exit status 255
	I1007 12:31:41.705861  766330 main.go:141] libmachine: (ha-053933) DBG | output  : 
	I1007 12:31:44.706593  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:44.709072  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709617  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.709649  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709785  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:44.709814  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:44.709843  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:44.709856  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:44.709871  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:44.834399  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: <nil>: 
	I1007 12:31:44.834682  766330 main.go:141] libmachine: (ha-053933) KVM machine creation complete!
	I1007 12:31:44.834978  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:44.835619  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.835838  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.836043  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:31:44.836062  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:31:44.837184  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:31:44.837198  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:31:44.837203  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:31:44.837209  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.839398  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839807  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.839830  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839939  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.840108  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840281  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840429  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.840654  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.840918  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.840931  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:31:44.945582  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:44.945632  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:31:44.945644  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.948258  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948719  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.948754  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948921  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.949136  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949341  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949504  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.949690  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.949946  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.949963  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:31:45.055227  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:31:45.055350  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:31:45.055364  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:31:45.055378  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055638  766330 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:31:45.055680  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055865  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.058671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059121  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.059156  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059299  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.059582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059753  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059896  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.060046  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.060230  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.060242  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:31:45.177180  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:31:45.177214  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.180205  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180610  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.180640  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.181104  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181275  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181434  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.181657  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.181837  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.181854  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:31:45.296167  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:45.296213  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:31:45.296262  766330 buildroot.go:174] setting up certificates
	I1007 12:31:45.296275  766330 provision.go:84] configureAuth start
	I1007 12:31:45.296287  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.296598  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.299370  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299721  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.299769  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.302528  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.302981  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.303013  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.303173  766330 provision.go:143] copyHostCerts
	I1007 12:31:45.303222  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303263  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:31:45.303285  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303361  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:31:45.303500  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303523  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:31:45.303528  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303559  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:31:45.303616  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303633  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:31:45.303637  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303657  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:31:45.303708  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
	I1007 12:31:45.422772  766330 provision.go:177] copyRemoteCerts
	I1007 12:31:45.422847  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:31:45.422884  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.426109  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426432  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.426461  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426620  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.426796  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.426987  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.427121  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.508256  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:31:45.508354  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:31:45.535023  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:31:45.535097  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:31:45.561047  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:31:45.561146  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:31:45.586470  766330 provision.go:87] duration metric: took 290.178076ms to configureAuth
	I1007 12:31:45.586509  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:31:45.586752  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:45.586838  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.589503  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.589873  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.589917  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.590215  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.590402  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590554  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590703  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.590899  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.591142  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.591160  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:31:45.816081  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:31:45.816125  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:31:45.816137  766330 main.go:141] libmachine: (ha-053933) Calling .GetURL
	I1007 12:31:45.817540  766330 main.go:141] libmachine: (ha-053933) DBG | Using libvirt version 6000000
	I1007 12:31:45.820289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820694  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.820725  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820851  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:31:45.820871  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:31:45.820882  766330 client.go:171] duration metric: took 27.576881663s to LocalClient.Create
	I1007 12:31:45.820914  766330 start.go:167] duration metric: took 27.57695761s to libmachine.API.Create "ha-053933"
	I1007 12:31:45.820939  766330 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:31:45.820955  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:31:45.820986  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:45.821218  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:31:45.821261  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.823471  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.823791  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.823834  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.824015  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.824234  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.824403  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.824535  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.905405  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:31:45.910330  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:31:45.910363  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:31:45.910424  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:31:45.910498  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:31:45.910509  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:31:45.910617  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:31:45.921262  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:45.947335  766330 start.go:296] duration metric: took 126.377039ms for postStartSetup
	I1007 12:31:45.947395  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:45.948057  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.950566  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.950901  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.950931  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.951158  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:45.951337  766330 start.go:128] duration metric: took 27.725842508s to createHost
	I1007 12:31:45.951369  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.953682  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954057  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.954084  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954210  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.954414  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954585  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954727  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.954891  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.955077  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.955089  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:31:46.059048  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304306.039624942
	
	I1007 12:31:46.059075  766330 fix.go:216] guest clock: 1728304306.039624942
	I1007 12:31:46.059083  766330 fix.go:229] Guest: 2024-10-07 12:31:46.039624942 +0000 UTC Remote: 2024-10-07 12:31:45.951349706 +0000 UTC m=+27.845880248 (delta=88.275236ms)
	I1007 12:31:46.059106  766330 fix.go:200] guest clock delta is within tolerance: 88.275236ms
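	(The delta reported above is simply the guest timestamp minus the local one recorded in the same line: 12:31:46.039624942 - 12:31:45.951349706 = 0.088275236 s = 88.275236 ms, comfortably inside minikube's clock-skew tolerance, so no guest clock adjustment is attempted.)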
	I1007 12:31:46.059111  766330 start.go:83] releasing machines lock for "ha-053933", held for 27.833688154s
	I1007 12:31:46.059131  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.059394  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:46.062064  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062406  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.062431  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062578  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063106  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063318  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063436  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:31:46.063484  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.063563  766330 ssh_runner.go:195] Run: cat /version.json
	I1007 12:31:46.063582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.066118  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066393  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066431  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066454  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066641  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066729  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066762  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066811  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.066931  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066955  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067124  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.067115  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.067267  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067400  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.143506  766330 ssh_runner.go:195] Run: systemctl --version
	I1007 12:31:46.170858  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:31:46.332209  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:31:46.338580  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:31:46.338677  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:46.356826  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:31:46.356863  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:31:46.356954  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:31:46.374524  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:31:46.390007  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:31:46.390089  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:31:46.404935  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:31:46.420186  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:31:46.537561  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:31:46.724537  766330 docker.go:233] disabling docker service ...
	I1007 12:31:46.724631  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:31:46.740520  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:31:46.754710  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:31:46.868070  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:31:46.983211  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:31:46.998357  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:31:47.018646  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:31:47.018734  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.030677  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:31:47.030766  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.042531  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.053856  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.065763  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:31:47.077170  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.088459  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.106901  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
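	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a reconstruction from the commands, not a dump of the actual file on the node:

	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)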
	I1007 12:31:47.118161  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:31:47.128388  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:31:47.128462  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:31:47.142126  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:31:47.154515  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:47.283963  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:31:47.385321  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:31:47.385405  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:31:47.390485  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:31:47.390552  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:31:47.394825  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:31:47.439074  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:31:47.439187  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.469132  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.501636  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:31:47.503367  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:47.506449  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.506817  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:47.506859  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.507082  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:31:47.511597  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:47.525698  766330 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:31:47.525829  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:47.525874  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:47.561011  766330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:31:47.561094  766330 ssh_runner.go:195] Run: which lz4
	I1007 12:31:47.565196  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:31:47.565316  766330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:31:47.569571  766330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:31:47.569613  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:31:49.022834  766330 crio.go:462] duration metric: took 1.457534476s to copy over tarball
	I1007 12:31:49.022945  766330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:31:51.131868  766330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108889496s)
	I1007 12:31:51.131914  766330 crio.go:469] duration metric: took 2.109034387s to extract the tarball
	I1007 12:31:51.131926  766330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:31:51.169816  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:51.217403  766330 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:31:51.217431  766330 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:31:51.217440  766330 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:31:51.217556  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:31:51.217655  766330 ssh_runner.go:195] Run: crio config
	I1007 12:31:51.271379  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:51.271408  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:31:51.271420  766330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:31:51.271445  766330 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:31:51.271623  766330 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
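	(Note that the config dumped above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but reports as deprecated during preflight; see the warnings after kubeadm init further down. If such a file were carried forward by hand, the migration kubeadm itself suggests is, with old.yaml/new.yaml as placeholder names:

	    kubeadm config migrate --old-config old.yaml --new-config new.yaml
	)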
	
	I1007 12:31:51.271654  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:31:51.271699  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:31:51.289463  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:31:51.289607  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
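	(The manifest above runs kube-vip as a static pod on the control-plane node, advertising the virtual IP 192.168.39.254 on eth0 and load-balancing API traffic on port 8443. A minimal check that the VIP has been claimed once the pod is up, run as a sketch on whichever node currently holds the plndr-cp-lock lease, would be:

	    ip addr show eth0 | grep 192.168.39.254
	    curl -k https://192.168.39.254:8443/healthz
	)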
	I1007 12:31:51.289677  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:31:51.300325  766330 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:31:51.300403  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:31:51.311044  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:31:51.329552  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:31:51.347746  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:31:51.366188  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:31:51.384590  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:31:51.388865  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:51.402571  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:51.531092  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:51.550538  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:31:51.550568  766330 certs.go:194] generating shared ca certs ...
	I1007 12:31:51.550589  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.550791  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:31:51.550844  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:31:51.550855  766330 certs.go:256] generating profile certs ...
	I1007 12:31:51.550949  766330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:31:51.550971  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt with IP's: []
	I1007 12:31:51.873489  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt ...
	I1007 12:31:51.873532  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt: {Name:mkf7b8a7f4d9827c14fd0fbc8bb02e2f79d65528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873758  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key ...
	I1007 12:31:51.873776  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key: {Name:mk6b5a827040be723c18ebdcd9fe7d1599565bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873894  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a
	I1007 12:31:51.873912  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I1007 12:31:52.061549  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a ...
	I1007 12:31:52.061587  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a: {Name:mk1a012d659f1c8c4afc92ca485eba408eb37a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061787  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a ...
	I1007 12:31:52.061804  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a: {Name:mkb1195bd1ddd6ea78076dea0e840887aeae92ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061908  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:31:52.062012  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:31:52.062107  766330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:31:52.062125  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt with IP's: []
	I1007 12:31:52.119663  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt ...
	I1007 12:31:52.119698  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt: {Name:mkf6d674dcac47b878e8df13383f77bcf932d249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.119900  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key ...
	I1007 12:31:52.119913  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key: {Name:mk301510b9dc1296a9e7f127da3f0d4b86905808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.120033  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:31:52.120053  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:31:52.120064  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:31:52.120077  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:31:52.120087  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:31:52.120118  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:31:52.120142  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:31:52.120155  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:31:52.120209  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:31:52.120251  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:31:52.120261  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:31:52.120290  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:31:52.120312  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:31:52.120339  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:31:52.120379  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:52.120408  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.120422  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.120434  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.121128  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:31:52.149003  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:31:52.175017  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:31:52.201648  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:31:52.228352  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:31:52.255290  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:31:52.282215  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:31:52.309286  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:31:52.337694  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:31:52.366883  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:31:52.402754  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:31:52.430306  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:31:52.451397  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:31:52.458450  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:31:52.470676  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476879  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476941  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.483560  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:31:52.495531  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:31:52.507273  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512685  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512760  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.519035  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:31:52.530701  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:31:52.542163  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547093  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547169  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.553420  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
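	(The openssl/ln pairs above follow the standard OpenSSL CA-path layout: openssl x509 -hash -noout -in <cert> prints the certificate's subject hash, and a symlink named <hash>.0 in /etc/ssl/certs lets TLS clients locate the CA by hash at verification time. Recreating one of these links by hand would look like the sketch below; the hash value depends on the certificate:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$h.0
	)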
	I1007 12:31:52.565081  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:31:52.569549  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:31:52.569630  766330 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:52.569737  766330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:31:52.569800  766330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:31:52.613192  766330 cri.go:89] found id: ""
	I1007 12:31:52.613311  766330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:31:52.625713  766330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:31:52.636220  766330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:31:52.646590  766330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:31:52.646626  766330 kubeadm.go:157] found existing configuration files:
	
	I1007 12:31:52.646686  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:31:52.656870  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:31:52.656944  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:31:52.667467  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:31:52.677109  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:31:52.677186  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:31:52.687168  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.696969  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:31:52.697035  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.706604  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:31:52.716252  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:31:52.716325  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:31:52.726572  766330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:31:52.847487  766330 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:31:52.847581  766330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:31:52.955260  766330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:31:52.955420  766330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:31:52.955545  766330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:31:52.964537  766330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:31:53.051755  766330 out.go:235]   - Generating certificates and keys ...
	I1007 12:31:53.051938  766330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:31:53.052035  766330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:31:53.320791  766330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:31:53.468201  766330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:31:53.842801  766330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:31:53.969642  766330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:31:54.101242  766330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:31:54.101440  766330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.456134  766330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:31:54.456354  766330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.521797  766330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:31:54.769778  766330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:31:55.125227  766330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:31:55.125448  766330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:31:55.361551  766330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:31:55.783698  766330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:31:56.057409  766330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:31:56.211507  766330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:31:56.348279  766330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:31:56.349002  766330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:31:56.353525  766330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:31:56.355620  766330 out.go:235]   - Booting up control plane ...
	I1007 12:31:56.355760  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:31:56.356147  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:31:56.356974  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:31:56.373175  766330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:31:56.381538  766330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:31:56.381594  766330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:31:56.521323  766330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:31:56.521511  766330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:31:57.022943  766330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.739695ms
	I1007 12:31:57.023054  766330 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:32:03.058810  766330 kubeadm.go:310] [api-check] The API server is healthy after 6.037121779s
	I1007 12:32:03.072819  766330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:32:03.101026  766330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:32:03.645977  766330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:32:03.646231  766330 kubeadm.go:310] [mark-control-plane] Marking the node ha-053933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:32:03.661217  766330 kubeadm.go:310] [bootstrap-token] Using token: ofkgus.681l1bfefmhh1xkb
	I1007 12:32:03.662957  766330 out.go:235]   - Configuring RBAC rules ...
	I1007 12:32:03.663116  766330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:32:03.674911  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:32:03.697863  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:32:03.703512  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:32:03.708092  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:32:03.713563  766330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:32:03.734636  766330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:32:03.997011  766330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:32:04.464216  766330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:32:04.465131  766330 kubeadm.go:310] 
	I1007 12:32:04.465191  766330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:32:04.465199  766330 kubeadm.go:310] 
	I1007 12:32:04.465336  766330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:32:04.465360  766330 kubeadm.go:310] 
	I1007 12:32:04.465394  766330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:32:04.465446  766330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:32:04.465491  766330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:32:04.465504  766330 kubeadm.go:310] 
	I1007 12:32:04.465572  766330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:32:04.465599  766330 kubeadm.go:310] 
	I1007 12:32:04.465644  766330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:32:04.465663  766330 kubeadm.go:310] 
	I1007 12:32:04.465719  766330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:32:04.465794  766330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:32:04.465885  766330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:32:04.465901  766330 kubeadm.go:310] 
	I1007 12:32:04.466075  766330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:32:04.466193  766330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:32:04.466201  766330 kubeadm.go:310] 
	I1007 12:32:04.466294  766330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466394  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:32:04.466415  766330 kubeadm.go:310] 	--control-plane 
	I1007 12:32:04.466421  766330 kubeadm.go:310] 
	I1007 12:32:04.466490  766330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:32:04.466497  766330 kubeadm.go:310] 
	I1007 12:32:04.466565  766330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466661  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:32:04.467760  766330 kubeadm.go:310] W1007 12:31:52.830915     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468039  766330 kubeadm.go:310] W1007 12:31:52.831996     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468166  766330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
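	(The join commands printed above embed the bootstrap token ofkgus.681l1bfefmhh1xkb, which the InitConfiguration earlier gives a 24h ttl. If a node had to be joined after that window, which this test does not do, a fresh command can be generated on an existing control-plane node with standard kubeadm tooling:

	    kubeadm token create --print-join-command
	)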
	I1007 12:32:04.468194  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:32:04.468205  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:32:04.470298  766330 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:32:04.471574  766330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:32:04.477802  766330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:32:04.477826  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:32:04.497072  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:32:04.906135  766330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:32:04.906201  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:04.906237  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933 minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=true
	I1007 12:32:05.063682  766330 ops.go:34] apiserver oom_adj: -16
	I1007 12:32:05.063698  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:05.564187  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.063920  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.563953  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.064483  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.564765  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.064739  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.564036  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.063899  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.198443  766330 kubeadm.go:1113] duration metric: took 4.292302963s to wait for elevateKubeSystemPrivileges
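
The half-second `kubectl get sa default` loop above is elevateKubeSystemPrivileges waiting for the control plane to create the default ServiceAccount before the minikube-rbac cluster-admin binding is considered settled. A minimal client-go sketch of the same wait, assuming only a kubeconfig path (not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, which is the
// condition the repeated `kubectl get sa default` calls above are checking for.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %s: %v", timeout, err)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
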
	I1007 12:32:09.198484  766330 kubeadm.go:394] duration metric: took 16.62887336s to StartCluster
	I1007 12:32:09.198511  766330 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.198603  766330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.199399  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.199661  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:32:09.199654  766330 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:09.199683  766330 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:32:09.199750  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:32:09.199769  766330 addons.go:69] Setting storage-provisioner=true in profile "ha-053933"
	I1007 12:32:09.199790  766330 addons.go:234] Setting addon storage-provisioner=true in "ha-053933"
	I1007 12:32:09.199789  766330 addons.go:69] Setting default-storageclass=true in profile "ha-053933"
	I1007 12:32:09.199827  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.199861  766330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-053933"
	I1007 12:32:09.199924  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:09.200250  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200297  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.200379  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200403  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.217502  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I1007 12:32:09.217554  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1007 12:32:09.217985  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218145  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218593  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218622  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.218725  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218753  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.219006  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219124  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219326  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.219637  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.219691  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.221998  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.222368  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:32:09.223019  766330 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:32:09.223381  766330 addons.go:234] Setting addon default-storageclass=true in "ha-053933"
	I1007 12:32:09.223435  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.223846  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.223902  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.237604  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I1007 12:32:09.238161  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.238820  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.238847  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.239267  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.239621  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.242388  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.242754  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1007 12:32:09.243274  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.243977  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.244007  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.244396  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.244986  766330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:32:09.245068  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.245147  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.246976  766330 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.247004  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:32:09.247031  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.251289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.251823  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.251851  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.252064  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.252294  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.252448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.252580  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.263439  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1007 12:32:09.263833  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.264713  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.264733  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.265269  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.265519  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.267198  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.267411  766330 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:09.267431  766330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:32:09.267448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.271160  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.271638  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.271652  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.272078  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.272247  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.272388  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.272476  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.422833  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:32:09.443940  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.510999  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:10.102670  766330 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
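
The pipeline above rewrites the coredns ConfigMap with sed so that host.minikube.internal resolves to the gateway IP 192.168.39.1 from inside the cluster. The same edit can be expressed with client-go; the string anchor on the forward plugin mirrors the sed anchor and is just as fragile, so treat this as a sketch rather than a robust Corefile editor.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord adds a hosts{} block for host.minikube.internal to the coredns
// Corefile, the same edit the sed pipeline above performs on the node.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	// Splice the block in just before the forward plugin, mirroring the sed anchor.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
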
	I1007 12:32:10.350678  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350704  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.350784  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350815  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351026  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351046  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351056  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351063  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351128  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.351191  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351222  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351239  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351246  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.352633  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352653  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352669  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.352691  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352714  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352813  766330 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:32:10.352834  766330 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:32:10.352951  766330 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:32:10.352963  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.352974  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.352984  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.364518  766330 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:32:10.365197  766330 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:32:10.365213  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.365222  766330 round_trippers.go:473]     Content-Type: application/json
	I1007 12:32:10.365226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.365229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.368346  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:32:10.368537  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.368555  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.368875  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.368889  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.368895  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.371604  766330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:32:10.373030  766330 addons.go:510] duration metric: took 1.173351959s for enable addons: enabled=[storage-provisioner default-storageclass]
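
The GET and PUT against /apis/storage.k8s.io/v1/storageclasses/standard a few lines up are consistent with the default-storageclass addon marking the standard class as the cluster default. One plausible (unverified) reading of that request pair, sketched with client-go:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefaultStorageClass annotates the "standard" StorageClass as the cluster default.
func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
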
	I1007 12:32:10.373068  766330 start.go:246] waiting for cluster config update ...
	I1007 12:32:10.373085  766330 start.go:255] writing updated cluster config ...
	I1007 12:32:10.375098  766330 out.go:201] 
	I1007 12:32:10.377249  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:10.377439  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.379490  766330 out.go:177] * Starting "ha-053933-m02" control-plane node in "ha-053933" cluster
	I1007 12:32:10.381087  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:32:10.381130  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:32:10.381324  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:32:10.381339  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:32:10.381436  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.381664  766330 start.go:360] acquireMachinesLock for ha-053933-m02: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:32:10.381718  766330 start.go:364] duration metric: took 27.543µs to acquireMachinesLock for "ha-053933-m02"
	I1007 12:32:10.381752  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:10.381840  766330 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:32:10.383550  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:32:10.383680  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:10.383748  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:10.399329  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1007 12:32:10.399900  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:10.400460  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:10.400489  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:10.400855  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:10.401087  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:10.401325  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:10.401564  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:32:10.401597  766330 client.go:168] LocalClient.Create starting
	I1007 12:32:10.401634  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:32:10.401683  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401708  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401774  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:32:10.401806  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401824  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401883  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:32:10.401911  766330 main.go:141] libmachine: (ha-053933-m02) Calling .PreCreateCheck
	I1007 12:32:10.402163  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:10.402584  766330 main.go:141] libmachine: Creating machine...
	I1007 12:32:10.402602  766330 main.go:141] libmachine: (ha-053933-m02) Calling .Create
	I1007 12:32:10.402815  766330 main.go:141] libmachine: (ha-053933-m02) Creating KVM machine...
	I1007 12:32:10.404630  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing default KVM network
	I1007 12:32:10.404848  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing private KVM network mk-ha-053933
	I1007 12:32:10.405187  766330 main.go:141] libmachine: (ha-053933-m02) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.405209  766330 main.go:141] libmachine: (ha-053933-m02) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:32:10.405302  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.405168  766716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.405466  766330 main.go:141] libmachine: (ha-053933-m02) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:32:10.686269  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.686123  766716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa...
	I1007 12:32:10.953304  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953079  766716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk...
	I1007 12:32:10.953335  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing magic tar header
	I1007 12:32:10.953347  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing SSH key tar header
	I1007 12:32:10.953354  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953302  766716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.953491  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02
	I1007 12:32:10.953520  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 (perms=drwx------)
	I1007 12:32:10.953532  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:32:10.953546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.953559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:32:10.953567  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:32:10.953577  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:32:10.953583  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:32:10.953594  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:32:10.953602  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:32:10.953610  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:32:10.953626  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:10.953639  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:32:10.953649  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home
	I1007 12:32:10.953661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Skipping /home - not owner
	I1007 12:32:10.954892  766330 main.go:141] libmachine: (ha-053933-m02) define libvirt domain using xml: 
	I1007 12:32:10.954919  766330 main.go:141] libmachine: (ha-053933-m02) <domain type='kvm'>
	I1007 12:32:10.954926  766330 main.go:141] libmachine: (ha-053933-m02)   <name>ha-053933-m02</name>
	I1007 12:32:10.954934  766330 main.go:141] libmachine: (ha-053933-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:32:10.954971  766330 main.go:141] libmachine: (ha-053933-m02)   <vcpu>2</vcpu>
	I1007 12:32:10.954998  766330 main.go:141] libmachine: (ha-053933-m02)   <features>
	I1007 12:32:10.955008  766330 main.go:141] libmachine: (ha-053933-m02)     <acpi/>
	I1007 12:32:10.955019  766330 main.go:141] libmachine: (ha-053933-m02)     <apic/>
	I1007 12:32:10.955028  766330 main.go:141] libmachine: (ha-053933-m02)     <pae/>
	I1007 12:32:10.955038  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955048  766330 main.go:141] libmachine: (ha-053933-m02)   </features>
	I1007 12:32:10.955059  766330 main.go:141] libmachine: (ha-053933-m02)   <cpu mode='host-passthrough'>
	I1007 12:32:10.955086  766330 main.go:141] libmachine: (ha-053933-m02)   
	I1007 12:32:10.955107  766330 main.go:141] libmachine: (ha-053933-m02)   </cpu>
	I1007 12:32:10.955118  766330 main.go:141] libmachine: (ha-053933-m02)   <os>
	I1007 12:32:10.955130  766330 main.go:141] libmachine: (ha-053933-m02)     <type>hvm</type>
	I1007 12:32:10.955144  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='cdrom'/>
	I1007 12:32:10.955153  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='hd'/>
	I1007 12:32:10.955164  766330 main.go:141] libmachine: (ha-053933-m02)     <bootmenu enable='no'/>
	I1007 12:32:10.955170  766330 main.go:141] libmachine: (ha-053933-m02)   </os>
	I1007 12:32:10.955176  766330 main.go:141] libmachine: (ha-053933-m02)   <devices>
	I1007 12:32:10.955183  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='cdrom'>
	I1007 12:32:10.955199  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/boot2docker.iso'/>
	I1007 12:32:10.955214  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:32:10.955226  766330 main.go:141] libmachine: (ha-053933-m02)       <readonly/>
	I1007 12:32:10.955236  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955247  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='disk'>
	I1007 12:32:10.955259  766330 main.go:141] libmachine: (ha-053933-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:32:10.955273  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk'/>
	I1007 12:32:10.955284  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:32:10.955295  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955317  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955337  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='mk-ha-053933'/>
	I1007 12:32:10.955355  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955372  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955385  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955397  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='default'/>
	I1007 12:32:10.955410  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955419  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955429  766330 main.go:141] libmachine: (ha-053933-m02)     <serial type='pty'>
	I1007 12:32:10.955444  766330 main.go:141] libmachine: (ha-053933-m02)       <target port='0'/>
	I1007 12:32:10.955456  766330 main.go:141] libmachine: (ha-053933-m02)     </serial>
	I1007 12:32:10.955483  766330 main.go:141] libmachine: (ha-053933-m02)     <console type='pty'>
	I1007 12:32:10.955500  766330 main.go:141] libmachine: (ha-053933-m02)       <target type='serial' port='0'/>
	I1007 12:32:10.955516  766330 main.go:141] libmachine: (ha-053933-m02)     </console>
	I1007 12:32:10.955528  766330 main.go:141] libmachine: (ha-053933-m02)     <rng model='virtio'>
	I1007 12:32:10.955541  766330 main.go:141] libmachine: (ha-053933-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:32:10.955552  766330 main.go:141] libmachine: (ha-053933-m02)     </rng>
	I1007 12:32:10.955562  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955574  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955588  766330 main.go:141] libmachine: (ha-053933-m02)   </devices>
	I1007 12:32:10.955599  766330 main.go:141] libmachine: (ha-053933-m02) </domain>
	I1007 12:32:10.955606  766330 main.go:141] libmachine: (ha-053933-m02) 
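
The XML above is the complete libvirt definition for the ha-053933-m02 VM: the boot2docker ISO as a CD-ROM boot device, the raw disk, and two virtio NICs, one on the private mk-ha-053933 network and one on libvirt's default network. Defining and booting such a domain comes down to two libvirt calls; the following is only a sketch using the libvirt Go bindings (module path assumed to be libvirt.org/go/libvirt), not the kvm2 driver's actual code.

package main

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers the domain XML printed above with libvirtd and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..." in the log
}
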
	I1007 12:32:10.964084  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:92:85:a0 in network default
	I1007 12:32:10.964913  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring networks are active...
	I1007 12:32:10.964943  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:10.966004  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network default is active
	I1007 12:32:10.966794  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network mk-ha-053933 is active
	I1007 12:32:10.967567  766330 main.go:141] libmachine: (ha-053933-m02) Getting domain xml...
	I1007 12:32:10.968704  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:11.328435  766330 main.go:141] libmachine: (ha-053933-m02) Waiting to get IP...
	I1007 12:32:11.329255  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.329657  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.329684  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.329635  766716 retry.go:31] will retry after 304.626046ms: waiting for machine to come up
	I1007 12:32:11.636452  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.636889  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.636919  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.636838  766716 retry.go:31] will retry after 276.587443ms: waiting for machine to come up
	I1007 12:32:11.915507  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.915953  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.915981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.915913  766716 retry.go:31] will retry after 337.132979ms: waiting for machine to come up
	I1007 12:32:12.254562  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.255006  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.255031  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.254957  766716 retry.go:31] will retry after 414.173139ms: waiting for machine to come up
	I1007 12:32:12.670554  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.670981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.671027  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.670964  766716 retry.go:31] will retry after 736.75735ms: waiting for machine to come up
	I1007 12:32:13.409001  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:13.409465  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:13.409492  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:13.409419  766716 retry.go:31] will retry after 877.012423ms: waiting for machine to come up
	I1007 12:32:14.288329  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:14.288723  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:14.288753  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:14.288684  766716 retry.go:31] will retry after 1.037556164s: waiting for machine to come up
	I1007 12:32:15.327401  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:15.327809  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:15.327836  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:15.327768  766716 retry.go:31] will retry after 1.075590546s: waiting for machine to come up
	I1007 12:32:16.404635  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:16.405141  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:16.405170  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:16.405088  766716 retry.go:31] will retry after 1.694642723s: waiting for machine to come up
	I1007 12:32:18.101812  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:18.102290  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:18.102307  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:18.102257  766716 retry.go:31] will retry after 2.246296895s: waiting for machine to come up
	I1007 12:32:20.351742  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:20.352251  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:20.352273  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:20.352201  766716 retry.go:31] will retry after 2.298110151s: waiting for machine to come up
	I1007 12:32:22.653604  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:22.654280  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:22.654305  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:22.654158  766716 retry.go:31] will retry after 3.347094149s: waiting for machine to come up
	I1007 12:32:26.003104  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:26.003592  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:26.003618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:26.003545  766716 retry.go:31] will retry after 3.946300567s: waiting for machine to come up
	I1007 12:32:29.951184  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:29.951661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:29.951683  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:29.951615  766716 retry.go:31] will retry after 4.942604939s: waiting for machine to come up
	I1007 12:32:34.900038  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.900804  766330 main.go:141] libmachine: (ha-053933-m02) Found IP for machine: 192.168.39.227
	I1007 12:32:34.900839  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
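
The retry.go lines above poll the DHCP leases of mk-ha-053933 with a growing delay (roughly 0.3s up to about 5s) until the new MAC shows up with an address. A stdlib-only sketch of that pattern, assuming a caller-supplied lookup function (the real retry helper also randomizes its intervals):

package main

import (
	"errors"
	"time"
)

// waitForIP polls lookup with a growing delay until the lease appears or the
// timeout expires, in the spirit of the "will retry after ..." lines above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // roughly the growth visible in the log
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}
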
	I1007 12:32:34.900847  766330 main.go:141] libmachine: (ha-053933-m02) Reserving static IP address...
	I1007 12:32:34.901345  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "ha-053933-m02", mac: "52:54:00:e8:71:ec", ip: "192.168.39.227"} in network mk-ha-053933
	I1007 12:32:34.989559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:34.989593  766330 main.go:141] libmachine: (ha-053933-m02) Reserved static IP address: 192.168.39.227
	I1007 12:32:34.989607  766330 main.go:141] libmachine: (ha-053933-m02) Waiting for SSH to be available...
	I1007 12:32:34.993000  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.993348  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933
	I1007 12:32:34.993372  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:e8:71:ec
	I1007 12:32:34.993535  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:34.993565  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:34.993595  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:34.993608  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:34.993625  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:34.997438  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:32:34.997462  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:32:34.997471  766330 main.go:141] libmachine: (ha-053933-m02) DBG | command : exit 0
	I1007 12:32:34.997493  766330 main.go:141] libmachine: (ha-053933-m02) DBG | err     : exit status 255
	I1007 12:32:34.997502  766330 main.go:141] libmachine: (ha-053933-m02) DBG | output  : 
	I1007 12:32:38.000138  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:38.003563  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.003934  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.003965  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.004068  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:38.004097  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:38.004133  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:38.004156  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:38.004198  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:38.134356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:32:38.134575  766330 main.go:141] libmachine: (ha-053933-m02) KVM machine creation complete!
	I1007 12:32:38.134919  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:38.135497  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135718  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135838  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:32:38.135854  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetState
	I1007 12:32:38.137125  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:32:38.137139  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:32:38.137144  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:32:38.137149  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.139531  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140008  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.140029  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140173  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.140353  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140459  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.140739  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.140945  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.140955  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:32:38.245844  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.245874  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:32:38.245883  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.249067  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249461  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.249482  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249773  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.249996  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250184  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250363  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.250493  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.250691  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.250704  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:32:38.363524  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:32:38.363625  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:32:38.363640  766330 main.go:141] libmachine: Provisioning with buildroot...
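
The provisioner is picked by reading /etc/os-release over SSH and matching its ID field, which is why the Buildroot output above leads directly to "found compatible host: buildroot". A minimal stand-in parser (not minikube's actual detection code):

package main

import "strings"

// detectProvisioner pulls the ID= value out of `cat /etc/os-release` output,
// e.g. it returns "buildroot" for the output shown above.
func detectProvisioner(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if v, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}
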
	I1007 12:32:38.363656  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364053  766330 buildroot.go:166] provisioning hostname "ha-053933-m02"
	I1007 12:32:38.364084  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364321  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.367546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368073  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.368107  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368323  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.368535  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368704  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368874  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.369073  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.369311  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.369326  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m02 && echo "ha-053933-m02" | sudo tee /etc/hostname
	I1007 12:32:38.493958  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m02
	
	I1007 12:32:38.493990  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.496774  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497161  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.497193  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497352  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.497571  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497746  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497916  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.498140  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.498312  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.498329  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:32:38.616208  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.616246  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:32:38.616266  766330 buildroot.go:174] setting up certificates
	I1007 12:32:38.616276  766330 provision.go:84] configureAuth start
	I1007 12:32:38.616286  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.616609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:38.619075  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619398  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.619427  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619572  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.621757  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622105  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.622129  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622285  766330 provision.go:143] copyHostCerts
	I1007 12:32:38.622318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622352  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:32:38.622361  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622432  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:32:38.622511  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622529  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:32:38.622535  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622558  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:32:38.622599  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622622  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:32:38.622630  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622663  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:32:38.622733  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m02 san=[127.0.0.1 192.168.39.227 ha-053933-m02 localhost minikube]
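
provision.go:117 above issues a per-machine server certificate whose SAN list covers the loopback address, the node IP, and the node's hostnames. Below is a rough, self-contained Go sketch of issuing such a certificate: it self-signs instead of signing with ca-key.pem, and the key size and validity period are assumptions, so it only illustrates the shape of the SAN set, not minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA cert/key from ca.pem / ca-key.pem would sign this;
	// here we self-sign to keep the sketch self-contained.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-053933-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
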
	I1007 12:32:38.708452  766330 provision.go:177] copyRemoteCerts
	I1007 12:32:38.708528  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:32:38.708564  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.710962  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711285  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.711318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711472  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.711655  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.711820  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.711918  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:38.799093  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:32:38.799174  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:32:38.827105  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:32:38.827188  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:32:38.854871  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:32:38.854953  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:32:38.882148  766330 provision.go:87] duration metric: took 265.856123ms to configureAuth
	I1007 12:32:38.882180  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:32:38.882387  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:38.882485  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.885151  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885511  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.885545  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885761  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.885978  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886151  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886344  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.886506  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.886695  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.886715  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:32:39.128135  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:32:39.128167  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:32:39.128176  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetURL
	I1007 12:32:39.129618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using libvirt version 6000000
	I1007 12:32:39.132019  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132387  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.132415  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132625  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:32:39.132640  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:32:39.132647  766330 client.go:171] duration metric: took 28.73104158s to LocalClient.Create
	I1007 12:32:39.132672  766330 start.go:167] duration metric: took 28.731111532s to libmachine.API.Create "ha-053933"
	I1007 12:32:39.132682  766330 start.go:293] postStartSetup for "ha-053933-m02" (driver="kvm2")
	I1007 12:32:39.132692  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:32:39.132710  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.132980  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:32:39.133017  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.135744  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136124  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.136167  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136341  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.136530  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.136675  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.136835  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.221605  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:32:39.226484  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:32:39.226514  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:32:39.226584  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:32:39.226655  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:32:39.226665  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:32:39.226746  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:32:39.237427  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:39.261998  766330 start.go:296] duration metric: took 129.301228ms for postStartSetup
	I1007 12:32:39.262093  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:39.262719  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.265384  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.265792  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.265819  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.266155  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:39.266397  766330 start.go:128] duration metric: took 28.884542194s to createHost
	I1007 12:32:39.266428  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.268718  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.268995  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.269035  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.269138  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.269298  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269463  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269575  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.269703  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:39.269878  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:39.269888  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:32:39.379504  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304359.360836408
	
	I1007 12:32:39.379530  766330 fix.go:216] guest clock: 1728304359.360836408
	I1007 12:32:39.379539  766330 fix.go:229] Guest: 2024-10-07 12:32:39.360836408 +0000 UTC Remote: 2024-10-07 12:32:39.26641087 +0000 UTC m=+81.160941412 (delta=94.425538ms)
	I1007 12:32:39.379557  766330 fix.go:200] guest clock delta is within tolerance: 94.425538ms
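
fix.go:216-229 above compares the guest clock against the host and only resynchronizes when the delta exceeds a tolerance. A tiny Go sketch of that comparison, using the two timestamps from the log, is shown below; the one-second threshold is an assumption for illustration, not the value minikube actually uses.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log lines above.
	guest := time.Unix(1728304359, 360836408)
	remote := time.Date(2024, 10, 7, 12, 32, 39, 266410870, time.UTC)
	delta := guest.Sub(remote)        // ~94.4ms for this run
	const tolerance = time.Second     // assumed threshold, for illustration only
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
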
	I1007 12:32:39.379562  766330 start.go:83] releasing machines lock for "ha-053933-m02", held for 28.997822917s
	I1007 12:32:39.379579  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.379889  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.383410  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.383763  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.383796  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.386874  766330 out.go:177] * Found network options:
	I1007 12:32:39.388989  766330 out.go:177]   - NO_PROXY=192.168.39.152
	W1007 12:32:39.390421  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.390479  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391270  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391484  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391605  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:32:39.391666  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	W1007 12:32:39.391801  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.391871  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:32:39.391887  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.394867  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.394901  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395284  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395339  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395674  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395681  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395918  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.395928  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.396088  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396100  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396238  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.396245  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.642441  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:32:39.649674  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:32:39.649767  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:32:39.666653  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:32:39.666687  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:32:39.666767  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:32:39.684589  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:32:39.700168  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:32:39.700231  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:32:39.716005  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:32:39.731764  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:32:39.862714  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:32:40.011007  766330 docker.go:233] disabling docker service ...
	I1007 12:32:40.011096  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:32:40.027322  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:32:40.041607  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:32:40.187585  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:32:40.331438  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:32:40.347382  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:32:40.367495  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:32:40.367556  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.379748  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:32:40.379840  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.391760  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.403745  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.415505  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:32:40.428366  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.441667  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.460916  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
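
The sequence of sed invocations above (crio.go:59-70 and the following ssh_runner calls) edits /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching the cgroup manager to cgroupfs, and opening unprivileged low ports via default_sysctls. A hedged Go equivalent of one of those rewrites, the cgroup_manager replacement, is sketched below; writing to a scratch path instead of the real config file is an assumption for safety.

package main

import (
	"os"
	"regexp"
)

func main() {
	// Scratch copy of /etc/crio/crio.conf.d/02-crio.conf (assumed path).
	const path = "/tmp/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
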
	I1007 12:32:40.473748  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:32:40.485573  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:32:40.485645  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:32:40.500703  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:32:40.512028  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:40.646960  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:32:40.739246  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:32:40.739338  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:32:40.744292  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:32:40.744359  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:32:40.748439  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:32:40.790232  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:32:40.790320  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.827829  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.860461  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:32:40.862462  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:32:40.864274  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:40.867846  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868296  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:40.868323  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868742  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:32:40.873673  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:40.887367  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:32:40.887606  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:40.887888  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.887931  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.903464  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:32:40.903898  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.904410  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.904433  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.904903  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.905134  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:40.906904  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:40.907228  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.907278  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.922960  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I1007 12:32:40.923502  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.924055  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.924078  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.924407  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.924586  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:40.924737  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.227
	I1007 12:32:40.924756  766330 certs.go:194] generating shared ca certs ...
	I1007 12:32:40.924778  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:40.924946  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:32:40.925010  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:32:40.925020  766330 certs.go:256] generating profile certs ...
	I1007 12:32:40.925169  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:32:40.925208  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90
	I1007 12:32:40.925226  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.254]
	I1007 12:32:41.148971  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 ...
	I1007 12:32:41.149006  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90: {Name:mkfc72ac98e5f64b1efa978f83502cc26e6b00b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149188  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 ...
	I1007 12:32:41.149202  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90: {Name:mkb6d827b308c96cc8f5173b1a5723adff201a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149277  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:32:41.149418  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:32:41.149564  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:32:41.149589  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:32:41.149603  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:32:41.149618  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:32:41.149632  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:32:41.149645  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:32:41.149658  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:32:41.149670  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:32:41.149681  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:32:41.149730  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:32:41.149764  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:32:41.149774  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:32:41.149801  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:32:41.149822  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:32:41.149848  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:32:41.149885  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:41.149911  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.149925  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.149937  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.149971  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:41.153293  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153635  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:41.153659  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:41.154192  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:41.154376  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:41.154520  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:41.226577  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:32:41.232730  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:32:41.245060  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:32:41.251197  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:32:41.264593  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:32:41.269517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:32:41.281560  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:32:41.286754  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:32:41.299707  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:32:41.304594  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:32:41.317916  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:32:41.323393  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:32:41.336013  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:32:41.366179  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:32:41.393458  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:32:41.419874  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:32:41.447814  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:32:41.474678  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:32:41.500522  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:32:41.527411  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:32:41.552513  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:32:41.576732  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:32:41.602701  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:32:41.628143  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:32:41.644998  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:32:41.662248  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:32:41.679785  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:32:41.698239  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:32:41.717010  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:32:41.735412  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:32:41.753557  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:32:41.759787  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:32:41.771601  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776332  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776414  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.782579  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:32:41.793992  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:32:41.806293  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811220  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811296  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.817656  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:32:41.829292  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:32:41.840880  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845905  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845988  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.852343  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:32:41.864190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:32:41.868675  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:32:41.868747  766330 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1007 12:32:41.868844  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:32:41.868868  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:32:41.868905  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:32:41.889715  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:32:41.889813  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:32:41.889876  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.901277  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:32:41.901344  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.911928  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:32:41.911964  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912020  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912066  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:32:41.912079  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:32:41.917061  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:32:41.917099  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:32:42.483195  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.483287  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.490132  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:32:42.490184  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:32:42.569436  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:32:42.620637  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.620740  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.635485  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:32:42.635527  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
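
binary.go:74 and download.go:107 above fetch the v1.31.1 kubectl, kubeadm and kubelet binaries from dl.k8s.io, verifying each against its published .sha256 file before copying it into /var/lib/minikube/binaries on the node. A self-contained Go sketch of that verify-then-install pattern for kubectl follows; the local output path is an assumption and the error handling is deliberately minimal.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("checksum OK, writing kubectl")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil { // output path is an assumption
		panic(err)
	}
}
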
	I1007 12:32:43.157634  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:32:43.168142  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:32:43.185353  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:32:43.203562  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:32:43.222930  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:32:43.227330  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:43.240979  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:43.377709  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:32:43.396837  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:43.397301  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:43.397366  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:43.414130  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1007 12:32:43.414696  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:43.415312  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:43.415338  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:43.415686  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:43.415901  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:43.416102  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:32:43.416222  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:32:43.416248  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:43.419194  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419695  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:43.419728  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419860  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:43.420045  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:43.420225  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:43.420387  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:43.569631  766330 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:43.569697  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I1007 12:33:05.382098  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (21.812371374s)
	I1007 12:33:05.382136  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:33:05.983459  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m02 minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:33:06.136889  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:33:06.286153  766330 start.go:319] duration metric: took 22.870046293s to joinCluster
	I1007 12:33:06.286246  766330 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:06.286558  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:06.288312  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:33:06.290220  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:06.583421  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:06.686534  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:33:06.686755  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:33:06.686819  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:33:06.687163  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m02" to be "Ready" ...
	I1007 12:33:06.687340  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:06.687357  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:06.687368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:06.687373  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:06.711245  766330 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1007 12:33:07.188212  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.188242  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.188255  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.188274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.191359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:07.688452  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.688484  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.688497  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.688502  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.808189  766330 round_trippers.go:574] Response Status: 200 OK in 119 milliseconds
	I1007 12:33:08.187451  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.187480  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.187491  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.187496  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.191935  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:08.687677  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.687701  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.687711  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.687719  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.690915  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:08.691670  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:09.188237  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.188270  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.188281  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.188289  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.194158  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:09.687515  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.687547  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.687557  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.687562  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.690808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.188360  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.188385  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.188394  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.188400  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.191880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.688056  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.688084  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.688096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.688104  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.691003  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:11.188165  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.188195  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.188206  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.188211  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.191751  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:11.192284  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:11.687697  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.687733  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.687744  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.687751  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.692471  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:12.187925  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.187959  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.187971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.187977  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.191580  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:12.687588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.687620  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.687631  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.687637  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.691690  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:13.187912  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.187949  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.187959  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.187964  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.191046  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.688329  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.688359  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.688370  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.688374  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.692160  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.692713  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:14.188174  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.188198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.188207  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.188210  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.197312  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:33:14.688323  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.688353  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.688364  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.688369  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.692255  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.188273  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.188299  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.188309  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.188312  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.191633  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.688194  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.688221  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.688229  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.688233  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.691201  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:16.188087  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.188118  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.188130  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.188136  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.191654  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:16.192613  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:16.688084  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.688116  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.688127  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.688131  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.691196  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.188046  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.188079  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.188091  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.188099  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.191563  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.687488  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.687515  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.687523  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.687527  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.692225  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:18.187466  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.187496  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.187508  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.187513  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.190916  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.688169  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.688198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.688209  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.688214  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.691684  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.692180  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:19.188410  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.188443  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.188455  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.188461  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.191778  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:19.687861  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.687898  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.687909  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.687918  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.692517  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.187370  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.187394  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.187404  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.187409  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.190680  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.688383  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.688409  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.688418  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.688422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.692411  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.692972  766330 node_ready.go:49] node "ha-053933-m02" has status "Ready":"True"
	I1007 12:33:20.692999  766330 node_ready.go:38] duration metric: took 14.005807631s for node "ha-053933-m02" to be "Ready" ...
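The fourteen seconds of GETs above are minikube polling the node object roughly every 500ms until its Ready condition reports True. The same wait can be expressed directly with client-go; this is a hedged sketch rather than minikube's node_ready.go, and the kubeconfig path, node name and timeouts are illustrative.

    // waitnode.go: sketch of waiting for a node's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Poll every 500ms, give up after 6 minutes, mirroring the wait in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "ha-053933-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatalf("node never became Ready: %v", err)
        }
        fmt.Println("node ha-053933-m02 is Ready")
    }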
	I1007 12:33:20.693012  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:33:20.693143  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:20.693154  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.693162  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.693165  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.697361  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.703660  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.703786  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:33:20.703796  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.703803  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.703807  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.707181  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.708043  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.708061  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.708069  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.708074  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.710812  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.711426  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.711448  766330 pod_ready.go:82] duration metric: took 7.751816ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711460  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:33:20.711534  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.711542  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.711545  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.714909  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.715901  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.715918  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.715927  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.715934  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.719555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.720647  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.720668  766330 pod_ready.go:82] duration metric: took 9.201382ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720679  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720751  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:33:20.720759  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.720768  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.720773  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.723495  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.724196  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.724215  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.724226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.724229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.726952  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.727595  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.727616  766330 pod_ready.go:82] duration metric: took 6.930211ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727627  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727692  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:20.727700  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.727714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.727718  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.731049  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.731750  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.731766  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.731786  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.731793  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.734880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.228231  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.228260  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.228274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.228281  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.231667  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.232387  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.232407  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.232416  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.232422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.235588  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.728588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.728616  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.728628  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.728635  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.732106  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.732770  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.732786  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.732795  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.732798  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.735773  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.228683  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:22.228711  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.228720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.228724  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232193  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.232808  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.232825  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.232834  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232839  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.235792  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.236315  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.236338  766330 pod_ready.go:82] duration metric: took 1.508704734s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236354  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236419  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:33:22.236427  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.236434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.236438  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.239818  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.288880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:22.288905  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.288915  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.288920  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.292489  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.293074  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.293096  766330 pod_ready.go:82] duration metric: took 56.735786ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.293107  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.488539  766330 request.go:632] Waited for 195.305457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488616  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488627  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.488640  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.488646  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.492086  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.688457  766330 request.go:632] Waited for 195.312015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688532  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688537  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.688546  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.688550  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.691998  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.692647  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.692670  766330 pod_ready.go:82] duration metric: took 399.55659ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
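The "Waited ... due to client-side throttling" messages are emitted by client-go's own request rate limiter, not by API priority and fairness on the server. The rest.Config dumped earlier shows QPS:0 and Burst:0, which client-go replaces with its defaults of 5 requests per second and a burst of 10, so the rapid back-to-back pod and node GETs start queueing for roughly 200ms each. A sketch of raising those limits on a custom client follows; the values and kubeconfig path are illustrative, not what minikube uses.

    // ratelimit.go: sketch of raising client-go's client-side rate limits.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // QPS/Burst of zero (as in the rest.Config dumped above) fall back to
        // client-go's defaults of 5 QPS with a burst of 10, which is what
        // produces the client-side throttling waits. Raise them here.
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        if _, err := newFastClient("/path/to/kubeconfig"); err != nil { // placeholder path
            log.Fatal(err)
        }
    }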
	I1007 12:33:22.692683  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.888729  766330 request.go:632] Waited for 195.939419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888840  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888849  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.888862  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.888872  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.892505  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.088565  766330 request.go:632] Waited for 195.365241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088643  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088651  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.088662  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.088670  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.091637  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.092259  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.092277  766330 pod_ready.go:82] duration metric: took 399.588182ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.092289  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.289099  766330 request.go:632] Waited for 196.721146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289204  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289216  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.289227  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.289236  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.292352  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.488835  766330 request.go:632] Waited for 195.58765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488907  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488912  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.488920  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.488925  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.491857  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.492343  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.492364  766330 pod_ready.go:82] duration metric: took 400.067435ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.492375  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.688407  766330 request.go:632] Waited for 195.943093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688521  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688529  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.688538  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.688543  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.692233  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.888501  766330 request.go:632] Waited for 195.323816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888614  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888622  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.888633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.888639  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.892680  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:23.893104  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.893123  766330 pod_ready.go:82] duration metric: took 400.740542ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.893133  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.089301  766330 request.go:632] Waited for 196.068782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089368  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089374  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.089388  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.089395  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.092648  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.288647  766330 request.go:632] Waited for 195.319776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288759  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288778  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.288794  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.288805  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.292348  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.292959  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.292988  766330 pod_ready.go:82] duration metric: took 399.844819ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.293007  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.489072  766330 request.go:632] Waited for 195.96428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489149  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489157  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.489167  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.489175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.492662  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.688896  766330 request.go:632] Waited for 195.439422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689009  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689017  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.689029  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.689035  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.692350  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.692962  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.692988  766330 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.693003  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.889214  766330 request.go:632] Waited for 196.093786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889300  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889309  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.889322  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.889329  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.892619  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.088740  766330 request.go:632] Waited for 195.405391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088815  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088821  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.088831  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.088837  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.092543  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.093141  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:25.093166  766330 pod_ready.go:82] duration metric: took 400.155132ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:25.093183  766330 pod_ready.go:39] duration metric: took 4.400126454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
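Each pod_ready wait above reduces to fetching pods, scoped by the system-critical labels listed in the log, and checking the PodReady condition. A small illustrative helper, independent of minikube's own pod_ready.go, with the kubeconfig path as a placeholder:

    // podready.go: sketch of the Ready-condition check behind the pod_ready waits.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // One of the label selectors from the log; the others (component=etcd,
        // component=kube-apiserver, ...) are checked the same way.
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
        }
    }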
	I1007 12:33:25.093213  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:33:25.093283  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:33:25.111694  766330 api_server.go:72] duration metric: took 18.825401123s to wait for apiserver process to appear ...
	I1007 12:33:25.111735  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:33:25.111762  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:33:25.118517  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:33:25.118624  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:33:25.118639  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.118651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.118656  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.119598  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:33:25.119715  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:33:25.119734  766330 api_server.go:131] duration metric: took 7.991573ms to wait for apiserver health ...
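The apiserver health wait is a plain GET on /healthz, which returns the literal body "ok" when healthy, followed by a GET on /version. A minimal client-go equivalent, with the kubeconfig path as a placeholder:

    // healthz.go: sketch of the apiserver healthz and version checks.
    package main

    import (
        "context"
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // GET /healthz returns the body "ok" when the apiserver is healthy.
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            log.Fatalf("healthz: %v", err)
        }
        // GET /version reports the control-plane version (v1.31.1 in this run).
        ver, err := client.Discovery().ServerVersion()
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        fmt.Printf("healthz=%s version=%s\n", body, ver.GitVersion)
    }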
	I1007 12:33:25.119743  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:33:25.289166  766330 request.go:632] Waited for 169.340781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289250  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289255  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.289263  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.289268  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.295241  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.299874  766330 system_pods.go:59] 17 kube-system pods found
	I1007 12:33:25.299914  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.299919  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.299923  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.299926  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.299929  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.299933  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.299938  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.299941  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.299944  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.299947  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.299950  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.299953  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.299956  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.299959  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.299962  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.300005  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.300042  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.300050  766330 system_pods.go:74] duration metric: took 180.300279ms to wait for pod list to return data ...
	I1007 12:33:25.300061  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:33:25.489349  766330 request.go:632] Waited for 189.154197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489422  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489429  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.489441  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.489451  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.493783  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.494042  766330 default_sa.go:45] found service account: "default"
	I1007 12:33:25.494060  766330 default_sa.go:55] duration metric: took 193.9912ms for default service account to be created ...
	I1007 12:33:25.494070  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:33:25.688474  766330 request.go:632] Waited for 194.303496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688554  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688560  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.688568  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.688572  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.694194  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.700121  766330 system_pods.go:86] 17 kube-system pods found
	I1007 12:33:25.700159  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.700167  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.700179  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.700185  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.700191  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.700196  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.700202  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.700207  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.700213  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.700218  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.700223  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.700228  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.700233  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.700242  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.700248  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.700255  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.700258  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.700266  766330 system_pods.go:126] duration metric: took 206.189927ms to wait for k8s-apps to be running ...
	I1007 12:33:25.700277  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:33:25.700338  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:25.716873  766330 system_svc.go:56] duration metric: took 16.577644ms WaitForService to wait for kubelet
	I1007 12:33:25.716918  766330 kubeadm.go:582] duration metric: took 19.430632885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:33:25.716946  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:33:25.889445  766330 request.go:632] Waited for 172.381554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889527  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889535  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.889543  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.889547  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.893637  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.894406  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894446  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894466  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894476  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894483  766330 node_conditions.go:105] duration metric: took 177.530833ms to run NodePressure ...
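The NodePressure step reads each node's reported capacity; in this run both VMs show 2 CPUs and 17734596Ki of ephemeral storage. A hedged sketch of listing those figures with client-go (the exact checks in minikube's node_conditions.go may differ; the kubeconfig path is a placeholder):

    // nodecapacity.go: sketch of the per-node capacity readout behind node_conditions.go.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }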
	I1007 12:33:25.894499  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:33:25.894527  766330 start.go:255] writing updated cluster config ...
	I1007 12:33:25.896984  766330 out.go:201] 
	I1007 12:33:25.898622  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:25.898739  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.900470  766330 out.go:177] * Starting "ha-053933-m03" control-plane node in "ha-053933" cluster
	I1007 12:33:25.901744  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:33:25.901777  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:33:25.901887  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:33:25.901898  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:33:25.901996  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.902210  766330 start.go:360] acquireMachinesLock for ha-053933-m03: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:33:25.902261  766330 start.go:364] duration metric: took 29.008µs to acquireMachinesLock for "ha-053933-m03"
	I1007 12:33:25.902279  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:25.902373  766330 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:33:25.903871  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:33:25.903977  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:25.904021  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:25.919504  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I1007 12:33:25.920002  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:25.920499  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:25.920525  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:25.920897  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:25.921112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:25.921261  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:25.921411  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:33:25.921445  766330 client.go:168] LocalClient.Create starting
	I1007 12:33:25.921486  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:33:25.921530  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921554  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921635  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:33:25.921664  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921680  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921706  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:33:25.921718  766330 main.go:141] libmachine: (ha-053933-m03) Calling .PreCreateCheck
	I1007 12:33:25.921884  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:25.922300  766330 main.go:141] libmachine: Creating machine...
	I1007 12:33:25.922316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .Create
	I1007 12:33:25.922510  766330 main.go:141] libmachine: (ha-053933-m03) Creating KVM machine...
	I1007 12:33:25.923845  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing default KVM network
	I1007 12:33:25.924001  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing private KVM network mk-ha-053933
	I1007 12:33:25.924170  766330 main.go:141] libmachine: (ha-053933-m03) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:25.924210  766330 main.go:141] libmachine: (ha-053933-m03) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:33:25.924298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:25.924182  767113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:25.924373  766330 main.go:141] libmachine: (ha-053933-m03) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:33:26.206977  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.206809  767113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa...
	I1007 12:33:26.524415  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524231  767113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk...
	I1007 12:33:26.524455  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing magic tar header
	I1007 12:33:26.524470  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing SSH key tar header
	I1007 12:33:26.524482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524376  767113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:26.524496  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03
	I1007 12:33:26.524534  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 (perms=drwx------)
	I1007 12:33:26.524574  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:33:26.524585  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:33:26.524600  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:33:26.524609  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:33:26.524638  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:26.524653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:33:26.524661  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:33:26.524670  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:33:26.524678  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home
	I1007 12:33:26.524693  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Skipping /home - not owner
	I1007 12:33:26.524703  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:33:26.524718  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:33:26.524726  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.525722  766330 main.go:141] libmachine: (ha-053933-m03) define libvirt domain using xml: 
	I1007 12:33:26.525747  766330 main.go:141] libmachine: (ha-053933-m03) <domain type='kvm'>
	I1007 12:33:26.525776  766330 main.go:141] libmachine: (ha-053933-m03)   <name>ha-053933-m03</name>
	I1007 12:33:26.525795  766330 main.go:141] libmachine: (ha-053933-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:33:26.525808  766330 main.go:141] libmachine: (ha-053933-m03)   <vcpu>2</vcpu>
	I1007 12:33:26.525818  766330 main.go:141] libmachine: (ha-053933-m03)   <features>
	I1007 12:33:26.525830  766330 main.go:141] libmachine: (ha-053933-m03)     <acpi/>
	I1007 12:33:26.525838  766330 main.go:141] libmachine: (ha-053933-m03)     <apic/>
	I1007 12:33:26.525850  766330 main.go:141] libmachine: (ha-053933-m03)     <pae/>
	I1007 12:33:26.525859  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.525905  766330 main.go:141] libmachine: (ha-053933-m03)   </features>
	I1007 12:33:26.525934  766330 main.go:141] libmachine: (ha-053933-m03)   <cpu mode='host-passthrough'>
	I1007 12:33:26.525945  766330 main.go:141] libmachine: (ha-053933-m03)   
	I1007 12:33:26.525955  766330 main.go:141] libmachine: (ha-053933-m03)   </cpu>
	I1007 12:33:26.525965  766330 main.go:141] libmachine: (ha-053933-m03)   <os>
	I1007 12:33:26.525971  766330 main.go:141] libmachine: (ha-053933-m03)     <type>hvm</type>
	I1007 12:33:26.525976  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='cdrom'/>
	I1007 12:33:26.525983  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='hd'/>
	I1007 12:33:26.525988  766330 main.go:141] libmachine: (ha-053933-m03)     <bootmenu enable='no'/>
	I1007 12:33:26.525995  766330 main.go:141] libmachine: (ha-053933-m03)   </os>
	I1007 12:33:26.526002  766330 main.go:141] libmachine: (ha-053933-m03)   <devices>
	I1007 12:33:26.526013  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='cdrom'>
	I1007 12:33:26.526054  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/boot2docker.iso'/>
	I1007 12:33:26.526067  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:33:26.526077  766330 main.go:141] libmachine: (ha-053933-m03)       <readonly/>
	I1007 12:33:26.526087  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526099  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='disk'>
	I1007 12:33:26.526109  766330 main.go:141] libmachine: (ha-053933-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:33:26.526124  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk'/>
	I1007 12:33:26.526142  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:33:26.526153  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526162  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526172  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='mk-ha-053933'/>
	I1007 12:33:26.526180  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526189  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526201  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526212  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='default'/>
	I1007 12:33:26.526219  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526233  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526252  766330 main.go:141] libmachine: (ha-053933-m03)     <serial type='pty'>
	I1007 12:33:26.526271  766330 main.go:141] libmachine: (ha-053933-m03)       <target port='0'/>
	I1007 12:33:26.526293  766330 main.go:141] libmachine: (ha-053933-m03)     </serial>
	I1007 12:33:26.526317  766330 main.go:141] libmachine: (ha-053933-m03)     <console type='pty'>
	I1007 12:33:26.526331  766330 main.go:141] libmachine: (ha-053933-m03)       <target type='serial' port='0'/>
	I1007 12:33:26.526341  766330 main.go:141] libmachine: (ha-053933-m03)     </console>
	I1007 12:33:26.526352  766330 main.go:141] libmachine: (ha-053933-m03)     <rng model='virtio'>
	I1007 12:33:26.526364  766330 main.go:141] libmachine: (ha-053933-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:33:26.526375  766330 main.go:141] libmachine: (ha-053933-m03)     </rng>
	I1007 12:33:26.526382  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526387  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526400  766330 main.go:141] libmachine: (ha-053933-m03)   </devices>
	I1007 12:33:26.526412  766330 main.go:141] libmachine: (ha-053933-m03) </domain>
	I1007 12:33:26.526422  766330 main.go:141] libmachine: (ha-053933-m03) 
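	[editor's note] The lines above show the libvirt domain XML that the kvm2 driver generates for the m03 node. For readers reproducing this outside minikube, the following is a minimal standalone Go sketch (hypothetical names such as DomainSpec; not minikube's actual code) that renders a comparable definition with text/template; the output could then be fed to `virsh define`.

	// Illustrative sketch only: assemble a libvirt domain definition similar
	// to the one logged above. All paths and names are example values.
	package main

	import (
		"os"
		"text/template"
	)

	type DomainSpec struct {
		Name     string
		MemoryMB int
		VCPU     int
		ISOPath  string
		DiskPath string
		Network  string
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPU}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		spec := DomainSpec{
			Name:     "example-m03",
			MemoryMB: 2200,
			VCPU:     2,
			ISOPath:  "/path/to/boot2docker.iso",
			DiskPath: "/path/to/example-m03.rawdisk",
			Network:  "mk-example",
		}
		t := template.Must(template.New("domain").Parse(domainTmpl))
		// Write the rendered XML to stdout for inspection or `virsh define`.
		if err := t.Execute(os.Stdout, spec); err != nil {
			panic(err)
		}
	}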
	I1007 12:33:26.533781  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:c6:4c:5a in network default
	I1007 12:33:26.534377  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring networks are active...
	I1007 12:33:26.534401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.535036  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network default is active
	I1007 12:33:26.535318  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network mk-ha-053933 is active
	I1007 12:33:26.535654  766330 main.go:141] libmachine: (ha-053933-m03) Getting domain xml...
	I1007 12:33:26.536349  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.886582  766330 main.go:141] libmachine: (ha-053933-m03) Waiting to get IP...
	I1007 12:33:26.887435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.887805  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:26.887834  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.887787  767113 retry.go:31] will retry after 278.405187ms: waiting for machine to come up
	I1007 12:33:27.168357  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.168978  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.169005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.168920  767113 retry.go:31] will retry after 329.830323ms: waiting for machine to come up
	I1007 12:33:27.500231  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.500684  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.500728  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.500604  767113 retry.go:31] will retry after 372.653315ms: waiting for machine to come up
	I1007 12:33:27.875190  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.875624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.875654  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.875577  767113 retry.go:31] will retry after 444.943717ms: waiting for machine to come up
	I1007 12:33:28.322485  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.322945  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.322970  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.322909  767113 retry.go:31] will retry after 669.257582ms: waiting for machine to come up
	I1007 12:33:28.994144  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.994697  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.994715  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.994632  767113 retry.go:31] will retry after 733.137025ms: waiting for machine to come up
	I1007 12:33:29.729782  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:29.730264  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:29.730293  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:29.730214  767113 retry.go:31] will retry after 899.738353ms: waiting for machine to come up
	I1007 12:33:30.632328  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:30.632890  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:30.632916  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:30.632809  767113 retry.go:31] will retry after 931.890845ms: waiting for machine to come up
	I1007 12:33:31.566008  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:31.566423  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:31.566453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:31.566382  767113 retry.go:31] will retry after 1.324143868s: waiting for machine to come up
	I1007 12:33:32.892206  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:32.892600  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:32.892624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:32.892560  767113 retry.go:31] will retry after 1.884957277s: waiting for machine to come up
	I1007 12:33:34.779972  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:34.780414  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:34.780482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:34.780403  767113 retry.go:31] will retry after 2.797940617s: waiting for machine to come up
	I1007 12:33:37.580503  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:37.580938  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:37.581017  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:37.580916  767113 retry.go:31] will retry after 3.450180083s: waiting for machine to come up
	I1007 12:33:41.032804  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:41.033196  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:41.033227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:41.033144  767113 retry.go:31] will retry after 3.620491508s: waiting for machine to come up
	I1007 12:33:44.657262  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:44.657724  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:44.657749  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:44.657677  767113 retry.go:31] will retry after 4.652577623s: waiting for machine to come up
	I1007 12:33:49.314220  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314598  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314619  766330 main.go:141] libmachine: (ha-053933-m03) Found IP for machine: 192.168.39.53
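	[editor's note] The "will retry after ..." lines above show the driver polling for the new VM's DHCP lease with a growing delay between attempts. A minimal Go sketch of that retry-with-back-off pattern follows; the helper name and durations are illustrative assumptions, not minikube's retry.go.

	// Sketch of a retry loop with growing delay, as seen in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryUntil keeps calling fn until it succeeds or the deadline passes,
	// lengthening the pause between attempts each round.
	func retryUntil(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the back-off roughly like the log above
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		})
		fmt.Println("result:", err)
	}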
	I1007 12:33:49.314644  766330 main.go:141] libmachine: (ha-053933-m03) Reserving static IP address...
	I1007 12:33:49.315014  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "ha-053933-m03", mac: "52:54:00:92:71:bc", ip: "192.168.39.53"} in network mk-ha-053933
	I1007 12:33:49.395618  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:49.395664  766330 main.go:141] libmachine: (ha-053933-m03) Reserved static IP address: 192.168.39.53
	I1007 12:33:49.395679  766330 main.go:141] libmachine: (ha-053933-m03) Waiting for SSH to be available...
	I1007 12:33:49.398571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.398960  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933
	I1007 12:33:49.398990  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:92:71:bc
	I1007 12:33:49.399160  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:49.399184  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:49.399214  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:49.399227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:49.399241  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:49.403005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:33:49.403027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:33:49.403035  766330 main.go:141] libmachine: (ha-053933-m03) DBG | command : exit 0
	I1007 12:33:49.403039  766330 main.go:141] libmachine: (ha-053933-m03) DBG | err     : exit status 255
	I1007 12:33:49.403074  766330 main.go:141] libmachine: (ha-053933-m03) DBG | output  : 
	I1007 12:33:52.403247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:52.406252  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.406668  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.406699  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.407002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:52.407027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:52.407053  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:52.407069  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:52.407109  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:52.534915  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:33:52.535288  766330 main.go:141] libmachine: (ha-053933-m03) KVM machine creation complete!
	I1007 12:33:52.535635  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:52.536389  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536639  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536874  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:33:52.536891  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetState
	I1007 12:33:52.538444  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:33:52.538462  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:33:52.538469  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:33:52.538476  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.541542  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.541939  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.541963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.542112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.542296  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542481  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542677  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.542861  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.543138  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.543151  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:33:52.649741  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:52.649782  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:33:52.649794  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.652589  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.652969  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.653002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.653140  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.653374  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653551  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653673  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.653873  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.654072  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.654084  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:33:52.759715  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:33:52.759834  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:33:52.759854  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:33:52.759868  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760230  766330 buildroot.go:166] provisioning hostname "ha-053933-m03"
	I1007 12:33:52.760268  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760500  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.763370  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.763827  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.763857  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.764033  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.764271  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764477  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764633  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.764776  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.764967  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.764978  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m03 && echo "ha-053933-m03" | sudo tee /etc/hostname
	I1007 12:33:52.887558  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m03
	
	I1007 12:33:52.887587  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.890785  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.891281  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891393  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.891600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.891855  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.892166  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.892433  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.892634  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.892651  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:33:53.009149  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:53.009337  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:33:53.009478  766330 buildroot.go:174] setting up certificates
	I1007 12:33:53.009552  766330 provision.go:84] configureAuth start
	I1007 12:33:53.009602  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:53.009986  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.012616  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.012988  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.013047  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.013159  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.015298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015632  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.015653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015824  766330 provision.go:143] copyHostCerts
	I1007 12:33:53.015867  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.015916  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:33:53.015927  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.016009  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:33:53.016125  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016152  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:33:53.016162  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016198  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:33:53.016272  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016302  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:33:53.016310  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016353  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:33:53.016436  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m03 san=[127.0.0.1 192.168.39.53 ha-053933-m03 localhost minikube]
	I1007 12:33:53.275511  766330 provision.go:177] copyRemoteCerts
	I1007 12:33:53.275578  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:33:53.275609  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.278571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.278958  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.278997  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.279237  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.279470  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.279694  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.279856  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.365609  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:33:53.365705  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:33:53.394108  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:33:53.394203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:33:53.421846  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:33:53.421930  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:33:53.448310  766330 provision.go:87] duration metric: took 438.733854ms to configureAuth
	I1007 12:33:53.448346  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:33:53.448616  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:53.448711  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.451435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.451928  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.451963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.452102  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.452316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452472  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452605  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.452784  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.452957  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.452971  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:33:53.686714  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:33:53.686753  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:33:53.686762  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetURL
	I1007 12:33:53.688034  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using libvirt version 6000000
	I1007 12:33:53.690553  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691049  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.691081  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691275  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:33:53.691309  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:33:53.691317  766330 client.go:171] duration metric: took 27.769860907s to LocalClient.Create
	I1007 12:33:53.691347  766330 start.go:167] duration metric: took 27.76993753s to libmachine.API.Create "ha-053933"
	I1007 12:33:53.691356  766330 start.go:293] postStartSetup for "ha-053933-m03" (driver="kvm2")
	I1007 12:33:53.691366  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:33:53.691384  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.691657  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:33:53.691683  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.693729  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694161  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.694191  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694359  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.694535  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.694715  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.694900  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.777573  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:33:53.782595  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:33:53.782625  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:33:53.782710  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:33:53.782804  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:33:53.782816  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:33:53.782918  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:33:53.793716  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:53.819127  766330 start.go:296] duration metric: took 127.75028ms for postStartSetup
	I1007 12:33:53.819228  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:53.819965  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.822875  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823288  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.823318  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823585  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:53.823804  766330 start.go:128] duration metric: took 27.921419624s to createHost
	I1007 12:33:53.823830  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.826389  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826755  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.826788  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826991  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.827187  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827532  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.827708  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.827909  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.827922  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:33:53.935241  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304433.915881343
	
	I1007 12:33:53.935272  766330 fix.go:216] guest clock: 1728304433.915881343
	I1007 12:33:53.935282  766330 fix.go:229] Guest: 2024-10-07 12:33:53.915881343 +0000 UTC Remote: 2024-10-07 12:33:53.823818192 +0000 UTC m=+155.718348733 (delta=92.063151ms)
	I1007 12:33:53.935303  766330 fix.go:200] guest clock delta is within tolerance: 92.063151ms
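	[editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the 92ms delta as within tolerance. A small Go sketch of that check follows; the 2s tolerance and the simulated guest reading are assumptions for illustration only.

	// Sketch of a guest-clock skew check similar to the one logged above.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports the host-guest clock delta and whether its
	// absolute value is inside the allowed skew.
	func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := host.Sub(guest)
		abs := delta
		if abs < 0 {
			abs = -abs
		}
		return delta, abs <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(-92 * time.Millisecond) // simulated guest reading
		delta, ok := withinTolerance(host, guest, 2*time.Second)
		if ok {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}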
	I1007 12:33:53.935309  766330 start.go:83] releasing machines lock for "ha-053933-m03", held for 28.033038751s
	I1007 12:33:53.935340  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.935600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.938944  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.939372  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.939401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.942103  766330 out.go:177] * Found network options:
	I1007 12:33:53.943700  766330 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.227
	W1007 12:33:53.945305  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.945333  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.945354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946191  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946469  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946569  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:33:53.946621  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	W1007 12:33:53.946704  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.946780  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.946900  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:33:53.946926  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.950981  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951020  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951403  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951437  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951491  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951686  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951876  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951902  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952038  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952066  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952209  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952204  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.952359  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:54.197386  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:33:54.205923  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:33:54.206059  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:33:54.226436  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:33:54.226467  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:33:54.226539  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:33:54.247190  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:33:54.263380  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:33:54.263461  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:33:54.280192  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:33:54.297621  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:33:54.421983  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:33:54.595012  766330 docker.go:233] disabling docker service ...
	I1007 12:33:54.595103  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:33:54.611124  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:33:54.625647  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:33:54.766528  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:33:54.902157  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:33:54.917030  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:33:54.939198  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:33:54.939275  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.951699  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:33:54.951792  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.963943  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.975263  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.986454  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:33:54.998449  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.010053  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.029064  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
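
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager with conmon in the "pod" cgroup, and a default sysctl that opens unprivileged low ports. Grouped into one sketch, using the same expressions as in the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
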
	I1007 12:33:55.040536  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:33:55.051384  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:33:55.051443  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:33:55.065668  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
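
The sysctl probe fails only because br_netfilter is not loaded yet, which is the "might be okay" case the log mentions; minikube then loads the module and turns on IPv4 forwarding. The manual equivalent, mirroring the logged commands:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables          # visible once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"     # pod traffic needs forwarding
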
	I1007 12:33:55.076166  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:55.212352  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:33:55.312005  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:33:55.312090  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:33:55.318387  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:33:55.318471  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:33:55.322868  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:33:55.367251  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:33:55.367355  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.397971  766330 ssh_runner.go:195] Run: crio --version
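
With the config in place, CRI-O is restarted and the code waits for its socket before asking for versions. Checking the same things interactively looks roughly like this (a sketch):

    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo test -S /var/run/crio/crio.sock && sudo crictl version   # expect RuntimeVersion 1.29.1 here
    crio --version
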
	I1007 12:33:55.435128  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:33:55.436490  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:33:55.437841  766330 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.227
	I1007 12:33:55.439394  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:55.442218  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442572  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:55.442593  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442854  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:33:55.447427  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
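
The one-liner above is minikube's idempotent way of pinning host.minikube.internal in /etc/hosts: any stale entry is filtered out and the current gateway IP appended, with the write staged through a temp file so only the final cp needs root. Spelled out as a sketch (printf stands in for the tab-separated echo in the log):

    # refresh the host.minikube.internal entry without duplicating it
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
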
	I1007 12:33:55.460437  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:33:55.460787  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:55.461177  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.461238  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.477083  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1007 12:33:55.477627  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.478242  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.478264  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.478601  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.478770  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:33:55.480358  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:55.480665  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.480703  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.497617  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I1007 12:33:55.498208  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.498771  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.498802  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.499144  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.499349  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:55.499537  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.53
	I1007 12:33:55.499550  766330 certs.go:194] generating shared ca certs ...
	I1007 12:33:55.499567  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.499698  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:33:55.499751  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:33:55.499772  766330 certs.go:256] generating profile certs ...
	I1007 12:33:55.499874  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:33:55.499909  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23
	I1007 12:33:55.499931  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:33:55.566679  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 ...
	I1007 12:33:55.566718  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23: {Name:mk9518d7a648a9de4b8c05fe89f1c3f09f2c6a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.566929  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 ...
	I1007 12:33:55.566948  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23: {Name:mkdcb7e0de901ae74037605940d4a487b0fb8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.567053  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:33:55.567210  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:33:55.567369  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:33:55.567391  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:33:55.567411  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:33:55.567431  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:33:55.567450  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:33:55.567469  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:33:55.567488  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:33:55.567506  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:33:55.586158  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:33:55.586279  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:33:55.586335  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:33:55.586352  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:33:55.586387  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:33:55.586425  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:33:55.586458  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:33:55.586517  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:55.586558  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:33:55.586579  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:55.586598  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:33:55.586646  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:55.589684  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590162  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:55.590193  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590365  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:55.590577  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:55.590763  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:55.590948  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:55.666401  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:33:55.672290  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:33:55.685836  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:33:55.691589  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:33:55.704365  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:33:55.709554  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:33:55.723585  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:33:55.728967  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:33:55.742781  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:33:55.747517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:33:55.759055  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:33:55.763953  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:33:55.775294  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:33:55.802739  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:33:55.829606  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:33:55.854203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:33:55.881501  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:33:55.907802  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:33:55.935368  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:33:55.966709  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:33:55.993237  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:33:56.018616  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:33:56.044579  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:33:56.069120  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:33:56.087293  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:33:56.105801  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:33:56.126196  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:33:56.145822  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:33:56.163980  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:33:56.182187  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:33:56.201073  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:33:56.207142  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:33:56.218685  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.223978  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.224097  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.231835  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:33:56.243660  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:33:56.255269  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260456  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260520  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.267451  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:33:56.279865  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:33:56.291556  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296671  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296755  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.303021  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
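
The openssl/ln sequence above is how each CA dropped under /usr/share/ca-certificates gets registered with OpenSSL: the certificate's subject hash becomes the name of a symlink in /etc/ssl/certs. For the minikube CA seen in this run the pattern is:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")              # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
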
	I1007 12:33:56.314190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:33:56.319184  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:33:56.319253  766330 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1007 12:33:56.319359  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:33:56.319393  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:33:56.319441  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:33:56.337458  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:33:56.337539  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
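
That manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down; once kubelet starts it as a static pod, kube-vip should begin answering for the control-plane VIP 192.168.39.254 on port 8443. A quick reachability check from a node (not something the test itself runs, and it assumes nc is available):

    nc -vz 192.168.39.254 8443        # the VIP should accept TCP on the API port
    sudo crictl ps --name kube-vip    # the static pod's container should be running
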
	I1007 12:33:56.337609  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.352182  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:33:56.352262  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:33:56.364932  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:33:56.365107  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.365108  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:56.364948  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:33:56.365318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.365380  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.386729  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:33:56.386794  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:33:56.386811  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:33:56.386844  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:33:56.386813  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.387110  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.420143  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:33:56.420202  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
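
Since /var/lib/minikube/binaries/v1.31.1 does not exist on the fresh node, kubeadm, kubectl and kubelet are copied over from the host-side cache; that cache is itself populated from dl.k8s.io with a published sha256 per binary, as the "Not caching binary" URLs above show. A hedged sketch of that download-and-verify step for one binary (the URL comes from the log, the rest is illustrative):

    VER=v1.31.1; BIN=kubelet
    curl -fLo "$BIN"        "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}"
    curl -fLo "$BIN.sha256" "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}.sha256"
    echo "$(cat "$BIN.sha256")  $BIN" | sha256sum --check
    sudo install -m 0755 "$BIN" "/var/lib/minikube/binaries/${VER}/${BIN}"
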
	I1007 12:33:57.371744  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:33:57.382647  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:33:57.402832  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:33:57.421823  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:33:57.441482  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:33:57.445627  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:57.459762  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:57.603405  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:57.624431  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:57.624969  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:57.625051  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:57.641787  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I1007 12:33:57.642353  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:57.642903  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:57.642925  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:57.643307  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:57.643533  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:57.643693  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:33:57.643829  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:33:57.643846  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:57.646962  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647481  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:57.647512  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647651  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:57.647823  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:57.647983  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:57.648106  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:57.973692  766330 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:57.973754  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1007 12:34:20.692568  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.718770843s)
	I1007 12:34:20.692609  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:34:21.235276  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m03 minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:34:21.384823  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:34:21.546452  766330 start.go:319] duration metric: took 23.902751753s to joinCluster
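
kubeadm join has made ha-053933-m03 a third control-plane member; the follow-up kubectl calls label it for minikube and strip the control-plane NoSchedule taint so it can also run ordinary workloads. The resulting state can be checked from outside the test (a sketch, assuming a working kubeconfig/context for ha-053933):

    kubectl get node ha-053933-m03 --show-labels
    kubectl describe node ha-053933-m03 | grep -i taints   # expect <none> after the taint removal
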
	I1007 12:34:21.546537  766330 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:34:21.547030  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:34:21.548080  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:34:21.549612  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:34:21.823190  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:34:21.845870  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:34:21.846263  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:34:21.846360  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:34:21.846701  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:21.846820  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:21.846832  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:21.846844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:21.846854  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:21.850883  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
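
The identical GETs that follow are the readiness poll: roughly every 500ms the node object is re-fetched until its Ready condition turns True, which happens about 14 seconds later. The one-shot kubectl equivalent of this loop would be:

    kubectl wait --for=condition=Ready node/ha-053933-m03 --timeout=6m
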
	I1007 12:34:22.347874  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.347909  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.347923  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.347929  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.351566  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:22.847344  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.847377  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.866723  766330 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 12:34:23.347347  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.347375  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.347387  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.347394  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.351929  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:23.847333  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.847355  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.847363  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.847372  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.850896  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:23.851597  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:24.347594  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.347622  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.347633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.347638  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.351365  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:24.847338  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.847389  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.850525  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.347474  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.347501  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.347512  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.347517  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.350876  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.847008  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.847039  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.847047  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.847052  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.850192  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.347863  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.347891  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.347899  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.347903  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.351555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.352073  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:26.847450  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.847477  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.847485  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.847489  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.851359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.347145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.347169  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.347179  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.347185  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.350867  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.847674  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.847701  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.847710  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.847715  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.851381  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.346976  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.347004  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.347016  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.347020  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.350677  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.847299  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.847324  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.847334  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.847342  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.852124  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:28.852851  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:29.347470  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.347495  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.347506  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.347511  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.351169  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:29.847063  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.847088  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.847096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.847101  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.850541  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:30.347314  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.347341  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.347349  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.347354  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.351677  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:30.847295  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.847322  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.847331  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.847337  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.851021  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.347887  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.347917  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.347928  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.347932  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.351855  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.352449  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:31.847880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.847906  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.847914  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.847918  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.851368  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.347251  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.347285  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.347297  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.347304  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.351028  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.847346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.847371  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.847380  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.847385  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.850808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.347425  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.347452  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.347461  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.347465  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.351213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.847937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.847961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.847976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.847981  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.852995  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:33.853973  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:34.347964  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.347989  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.348006  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.348012  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.351982  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:34.847651  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.847676  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.847685  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.847690  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.851757  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.347354  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.347377  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.347386  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.347390  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.351104  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.847711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.847737  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.847748  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.847753  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.858606  766330 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:34:35.859308  766330 node_ready.go:49] node "ha-053933-m03" has status "Ready":"True"
	I1007 12:34:35.859333  766330 node_ready.go:38] duration metric: took 14.012608332s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:35.859345  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:35.859442  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:35.859456  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.859468  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.859474  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.869218  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:34:35.877082  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.877211  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:34:35.877225  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.877235  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.877246  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.881909  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.883332  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.883357  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.883368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.883378  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.888505  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:35.889562  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.889584  766330 pod_ready.go:82] duration metric: took 12.462204ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889599  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889693  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:34:35.889703  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.889714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.889720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.894158  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.894859  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.894878  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.894888  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.894894  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.898314  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.898768  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.898786  766330 pod_ready.go:82] duration metric: took 9.180577ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898799  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898867  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:34:35.898875  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.898882  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.898885  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.903049  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.903727  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.903743  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.903754  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.903761  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.906490  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.907003  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.907073  766330 pod_ready.go:82] duration metric: took 8.251291ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907112  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907213  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:34:35.907222  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.907230  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.907250  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.910128  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.910735  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:35.910749  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.910760  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.910767  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.914012  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.914767  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.914789  766330 pod_ready.go:82] duration metric: took 7.665567ms for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.914802  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:36.048508  766330 request.go:632] Waited for 133.622997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048575  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048580  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.048588  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.048592  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.052571  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.248730  766330 request.go:632] Waited for 195.373798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248827  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248836  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.248844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.248849  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.251932  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.448570  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.448595  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.448605  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.448610  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.452907  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:36.647847  766330 request.go:632] Waited for 194.331001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647936  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647943  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.647951  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.647956  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.651933  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.915705  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.915729  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.915738  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.915742  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.919213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.048315  766330 request.go:632] Waited for 128.338635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048400  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048408  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.048424  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.048429  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.051185  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:37.415988  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.416012  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.416021  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.416026  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.419983  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.448134  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.448158  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.448168  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.448175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.451453  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.915937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.915961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.915971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.915976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.920167  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:37.921049  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.921073  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.921086  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.921093  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.924604  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.925286  766330 pod_ready.go:93] pod "etcd-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:37.925306  766330 pod_ready.go:82] duration metric: took 2.010496086s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:37.925324  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.048769  766330 request.go:632] Waited for 123.357964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048854  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.048866  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.048882  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.052431  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.248516  766330 request.go:632] Waited for 195.362302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248623  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248634  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.248644  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.248651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.252242  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.252762  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.252784  766330 pod_ready.go:82] duration metric: took 327.452579ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.252797  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.447801  766330 request.go:632] Waited for 194.917273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447884  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447889  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.447897  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.447902  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.451491  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.648627  766330 request.go:632] Waited for 196.37134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648716  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.648722  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.648732  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.652823  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:38.653461  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.653480  766330 pod_ready.go:82] duration metric: took 400.67636ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.653490  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.848685  766330 request.go:632] Waited for 195.113793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848879  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.848893  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.848898  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.853139  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:39.048666  766330 request.go:632] Waited for 194.422198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048757  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048765  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.048773  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.048780  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.052403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.052899  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.052921  766330 pod_ready.go:82] duration metric: took 399.423284ms for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.052935  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.248381  766330 request.go:632] Waited for 195.347943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248463  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248470  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.248479  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.248532  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.252304  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.448654  766330 request.go:632] Waited for 195.421963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448774  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448781  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.448789  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.448794  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.452418  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.452966  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.452987  766330 pod_ready.go:82] duration metric: took 400.045067ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.452997  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.648075  766330 request.go:632] Waited for 195.002627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648177  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648188  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.648196  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.648203  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.651698  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.848035  766330 request.go:632] Waited for 195.367175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848150  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848170  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.848184  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.848192  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.851573  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.852402  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.852421  766330 pod_ready.go:82] duration metric: took 399.417648ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.852432  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.048539  766330 request.go:632] Waited for 196.032961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048627  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048633  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.048641  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.048647  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.052288  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.248694  766330 request.go:632] Waited for 195.442218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248809  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248819  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.248829  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.248839  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.252540  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.253313  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.253337  766330 pod_ready.go:82] duration metric: took 400.899295ms for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.253349  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.448782  766330 request.go:632] Waited for 195.339385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448860  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.448879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.448899  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.452366  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.648273  766330 request.go:632] Waited for 194.918691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648352  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.648361  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.648367  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.651885  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.652427  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.652452  766330 pod_ready.go:82] duration metric: took 399.095883ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.652465  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.848579  766330 request.go:632] Waited for 196.00042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848642  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848648  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.848657  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.848660  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.852403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.048483  766330 request.go:632] Waited for 195.416905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048561  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048566  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.048574  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.048582  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.052281  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.052757  766330 pod_ready.go:93] pod "kube-proxy-dqqj6" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.052775  766330 pod_ready.go:82] duration metric: took 400.298296ms for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.052785  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.247821  766330 request.go:632] Waited for 194.952122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247915  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247920  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.247942  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.247958  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.251753  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.447806  766330 request.go:632] Waited for 195.292745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447871  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447876  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.447883  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.447887  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.451374  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.452013  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.452035  766330 pod_ready.go:82] duration metric: took 399.242268ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.452048  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.648060  766330 request.go:632] Waited for 195.92136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648167  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.648176  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.648181  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.652281  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:41.848221  766330 request.go:632] Waited for 195.408754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848307  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848321  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.848329  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.848332  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.851502  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.852147  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.852173  766330 pod_ready.go:82] duration metric: took 400.115446ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.852186  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.048319  766330 request.go:632] Waited for 196.021861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048415  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048421  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.048429  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.048434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.051904  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.247954  766330 request.go:632] Waited for 195.30672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248042  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248048  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.248056  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.248060  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.251799  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.252357  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.252378  766330 pod_ready.go:82] duration metric: took 400.185892ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.252389  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.448570  766330 request.go:632] Waited for 196.083361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448644  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448649  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.448658  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.448665  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.452279  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.648464  766330 request.go:632] Waited for 195.372097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648558  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648567  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.648575  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.648587  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.651837  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.652442  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.652462  766330 pod_ready.go:82] duration metric: took 400.066938ms for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.652473  766330 pod_ready.go:39] duration metric: took 6.79311586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:42.652490  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:34:42.652549  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:34:42.669655  766330 api_server.go:72] duration metric: took 21.123075945s to wait for apiserver process to appear ...
	I1007 12:34:42.669686  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:34:42.669721  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:34:42.677436  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:34:42.677526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:34:42.677533  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.677545  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.677556  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.678540  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:34:42.678609  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:34:42.678628  766330 api_server.go:131] duration metric: took 8.935272ms to wait for apiserver health ...
	I1007 12:34:42.678643  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:34:42.848087  766330 request.go:632] Waited for 169.34722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848178  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848184  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.848192  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.848197  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.854471  766330 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:34:42.861098  766330 system_pods.go:59] 24 kube-system pods found
	I1007 12:34:42.861133  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:42.861137  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:42.861141  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:42.861145  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:42.861148  766330 system_pods.go:61] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:42.861151  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:42.861154  766330 system_pods.go:61] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:42.861157  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:42.861160  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:42.861163  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:42.861166  766330 system_pods.go:61] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:42.861170  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:42.861177  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:42.861180  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:42.861182  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:42.861185  766330 system_pods.go:61] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:42.861189  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:42.861191  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:42.861194  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:42.861197  766330 system_pods.go:61] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:42.861200  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:42.861203  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:42.861206  766330 system_pods.go:61] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:42.861212  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:42.861221  766330 system_pods.go:74] duration metric: took 182.569158ms to wait for pod list to return data ...
	I1007 12:34:42.861229  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:34:43.048753  766330 request.go:632] Waited for 187.419479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048837  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.048875  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.048879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.053383  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:43.053574  766330 default_sa.go:45] found service account: "default"
	I1007 12:34:43.053596  766330 default_sa.go:55] duration metric: took 192.357019ms for default service account to be created ...
	I1007 12:34:43.053609  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:34:43.248358  766330 request.go:632] Waited for 194.661822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248434  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248457  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.248468  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.248480  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.254368  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:43.261575  766330 system_pods.go:86] 24 kube-system pods found
	I1007 12:34:43.261611  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:43.261617  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:43.261621  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:43.261625  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:43.261628  766330 system_pods.go:89] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:43.261632  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:43.261636  766330 system_pods.go:89] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:43.261641  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:43.261646  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:43.261651  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:43.261656  766330 system_pods.go:89] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:43.261665  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:43.261670  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:43.261679  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:43.261684  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:43.261689  766330 system_pods.go:89] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:43.261704  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:43.261709  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:43.261713  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:43.261719  766330 system_pods.go:89] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:43.261722  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:43.261730  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:43.261736  766330 system_pods.go:89] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:43.261739  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:43.261746  766330 system_pods.go:126] duration metric: took 208.130933ms to wait for k8s-apps to be running ...
	I1007 12:34:43.261758  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:34:43.261819  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:34:43.278366  766330 system_svc.go:56] duration metric: took 16.59381ms WaitForService to wait for kubelet
	I1007 12:34:43.278406  766330 kubeadm.go:582] duration metric: took 21.731835186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:34:43.278428  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:34:43.447722  766330 request.go:632] Waited for 169.191028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447802  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447807  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.447815  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.447822  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.451521  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:43.453111  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453136  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453151  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453154  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453158  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453161  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453165  766330 node_conditions.go:105] duration metric: took 174.732727ms to run NodePressure ...
	I1007 12:34:43.453176  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:34:43.453200  766330 start.go:255] writing updated cluster config ...
	I1007 12:34:43.453638  766330 ssh_runner.go:195] Run: rm -f paused
	I1007 12:34:43.510074  766330 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:34:43.512318  766330 out.go:177] * Done! kubectl is now configured to use "ha-053933" cluster and "default" namespace by default
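	(Annotation, not part of the captured output.) The start log above ends once pod_ready has confirmed every kube-system pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) as Ready across all three ha-053933 nodes, using paired GET requests against the pod and its node. As a rough sketch of that readiness polling with client-go — the kubeconfig path, pod name, poll interval, and 6-minute budget below are assumptions for illustration, not values lifted from minikube's code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, give up after 6 minutes (the same overall budget the log shows).
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				// Example pod name taken from the log; any kube-system pod works the same way.
				pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-053933-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The repeated "Waited for ...ms due to client-side throttling" lines above come from client-go's built-in client-side rate limiter; a loop like this sketch would trip the same limiter if it issued requests faster than the client's configured QPS.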
	
	
	==> CRI-O <==
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.577594459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a53c7f6f-e6fa-4684-8027-9b94e4fca037 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.579145780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7556a25f-5f20-4b15-8cc1-b9135022fc23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.579624100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304697579597593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7556a25f-5f20-4b15-8cc1-b9135022fc23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.580349159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57898bba-4b93-4d47-af95-3eadb54787ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.580426317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57898bba-4b93-4d47-af95-3eadb54787ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.581139288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57898bba-4b93-4d47-af95-3eadb54787ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.627006231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c18c390e-ddd1-43ee-a79b-7ddf4ce4c6e0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.627100507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c18c390e-ddd1-43ee-a79b-7ddf4ce4c6e0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.628426958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=477eb3f8-8af1-4d13-a065-227897f0bbaf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.629068220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304697629037789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=477eb3f8-8af1-4d13-a065-227897f0bbaf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.629840986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5636a86-d5a2-4207-8526-de7773cafb16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.629930671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5636a86-d5a2-4207-8526-de7773cafb16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.630260958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5636a86-d5a2-4207-8526-de7773cafb16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.653236338Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f350af7b-ea34-4fb4-b9c2-5d14ea898371 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.653493806Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-gx88f,Uid:7ee12293-4d71-4418-957b-7685c35307e1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304484839734426,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:34:44.521063855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sj44v,Uid:268afc07-099f-4bed-bed4-7fdc7c64b948,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728304341326899280,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:32:21.015293208Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ac6bab3d-040f-4b93-9b26-1ce7e373ba68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304341325922030,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T12:32:21.010878267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tqtzn,Uid:8b161488-236f-456d-9385-0ed32039f1c8,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728304341319483607,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b161488-236f-456d-9385-0ed32039f1c8,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:32:21.002672002Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&PodSandboxMetadata{Name:kindnet-4gmn6,Uid:c532bcb5-a558-4246-87a7-540b2241a92d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304328988627128,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:32:08.681837953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&PodSandboxMetadata{Name:kube-proxy-7bwxp,Uid:5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304328959374472,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:32:08.649343384Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-053933,Uid:4c327992018cf3adef604f8e7c0b6ee6,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1728304317406053456,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{kubernetes.io/config.hash: 4c327992018cf3adef604f8e7c0b6ee6,kubernetes.io/config.seen: 2024-10-07T12:31:56.908644573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-053933,Uid:83382111c0ed3e763a0e292bd03c0bd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304317403217117,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: 83382111c0ed3e763a0e292bd03c0bd6,kubernetes.io/config.seen: 2024-10-07T12:31:56.908642979Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-053933,Uid:eb4419eb014ffb9581e9f43f41a3509a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304317392981633,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.152:8443,kubernetes.io/config.hash: eb4419eb014ffb9581e9f43f41a3509a,kubernetes.io/config.seen: 2024-10-07T12:31:56.908641887Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90cea5dfb2e910c8cc20093a2832f77447a710284
2374a99ca0b0b82e9b7b05b,Metadata:&PodSandboxMetadata{Name:etcd-ha-053933,Uid:985190db4d35f4cd798aacc03f9ae11b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304317391935879,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.152:2379,kubernetes.io/config.hash: 985190db4d35f4cd798aacc03f9ae11b,kubernetes.io/config.seen: 2024-10-07T12:31:56.908637684Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-053933,Uid:58955b129f3757d64c09a77816310a8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728304317380094330,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58955b129f3757d64c09a77816310a8d,kubernetes.io/config.seen: 2024-10-07T12:31:56.908643831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f350af7b-ea34-4fb4-b9c2-5d14ea898371 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.654177884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31915cbd-c90d-4e82-bc67-00c7cfda30e9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.654294813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31915cbd-c90d-4e82-bc67-00c7cfda30e9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.654709060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31915cbd-c90d-4e82-bc67-00c7cfda30e9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.674760177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b2ce198-c47c-4779-be54-57adeb727308 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.674832131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b2ce198-c47c-4779-be54-57adeb727308 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.675920051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81171358-b707-46f2-a706-f3db15d46949 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.676333269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304697676309316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81171358-b707-46f2-a706-f3db15d46949 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.676946552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5471a022-42e4-49b4-b014-138738e67fbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.676998496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5471a022-42e4-49b4-b014-138738e67fbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:17 ha-053933 crio[664]: time="2024-10-07 12:38:17.677251116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5471a022-42e4-49b4-b014-138738e67fbc name=/runtime.v1.RuntimeService/ListContainers
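	
	Editorial note: the repeated Request/Response pairs above are periodic polls of CRI-O over the CRI gRPC API on unix:///var/run/crio/crio.sock (Version, ImageFsInfo, and ListContainers with an empty filter, which is why the debug log says "No filters were applied, returning full container list" and dumps every container each time). The following is a minimal, hypothetical Go sketch of that same ListContainers call using the published CRI bindings; the socket path comes from the cri-socket annotation in the node description below, while the module layout, client setup, and error handling are assumptions for illustration only, not part of this test run.
	
// Hypothetical sketch: issue the ListContainers RPC seen in the CRI-O debug
// log above against the same CRI socket. Illustrative only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same endpoint CRI-O serves on in this cluster (see the
	// kubeadm.alpha.kubernetes.io/cri-socket annotation further down).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := pb.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the log above.
	resp, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State.String())
	}
}
	
	The loop prints roughly the same id/name/state columns that appear in the "container status" table below.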
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ba824fcefba6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e189556a18c92       busybox-7dff88458-gx88f
	2867817e1f480       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   0d58c208fea1c       coredns-7c65d6cfc9-tqtzn
	35044c701c165       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   89c61a059649d       coredns-7c65d6cfc9-sj44v
	3da0371dd7287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   8d79b5c178f5d       storage-provisioner
	65adc93f12fb7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1546c9281ca68       kindnet-4gmn6
	aea74cdff9eee       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   6bb33ce6417a6       kube-proxy-7bwxp
	e756202203ed3       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   0e8b4b3150e40       kube-vip-ha-053933
	f190ed8ea3a7d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   228ca0c55468f       kube-controller-manager-ha-053933
	096488f001092       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cd767df10cb41       kube-scheduler-ha-053933
	fe11729317aca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   90cea5dfb2e91       etcd-ha-053933
	a23f58b62cf7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   706ba9f92d690       kube-apiserver-ha-053933
	
	
	==> coredns [2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4] <==
	[INFO] 10.244.1.2:56331 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237909s
	[INFO] 10.244.1.2:36489 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015207s
	[INFO] 10.244.2.2:39298 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129286s
	[INFO] 10.244.2.2:47065 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177192s
	[INFO] 10.244.2.2:34384 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120996s
	[INFO] 10.244.2.2:55346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176087s
	[INFO] 10.244.0.4:46975 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114471s
	[INFO] 10.244.0.4:58945 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225792s
	[INFO] 10.244.0.4:43259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067959s
	[INFO] 10.244.0.4:34928 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001509847s
	[INFO] 10.244.0.4:46991 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079782s
	[INFO] 10.244.0.4:59761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084499s
	[INFO] 10.244.1.2:49251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140128s
	[INFO] 10.244.1.2:33825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172303s
	[INFO] 10.244.2.2:58538 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185922s
	[INFO] 10.244.0.4:44359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137041s
	[INFO] 10.244.0.4:58301 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099102s
	[INFO] 10.244.1.2:36803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222211s
	[INFO] 10.244.1.2:41006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207899s
	[INFO] 10.244.1.2:43041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129649s
	[INFO] 10.244.2.2:45405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175032s
	[INFO] 10.244.2.2:36952 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143195s
	[INFO] 10.244.0.4:39376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106075s
	[INFO] 10.244.0.4:60091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121535s
	[INFO] 10.244.0.4:37488 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084395s
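	
	Editorial note: the coredns lines above appear to follow the log plugin's common format, i.e. client address and port, query id, the question (type, class, name), transport, request size, DO bit and advertised UDP buffer size, then response code, response flags, response size, and duration. The small Go parser below splits a line on that assumption; the regular expression and field names are illustrative guesses based on the lines shown here, not taken from the coredns source.
	
// Hypothetical sketch: split one coredns query-log line (as printed above)
// into its fields. The field order is an assumption, not authoritative.
package main

import (
	"fmt"
	"regexp"
)

var lineRE = regexp.MustCompile(
	`^\[INFO\] (\S+):(\d+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	sample := `[INFO] 10.244.1.2:56331 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237909s`
	m := lineRE.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// Groups: 1 client, 2 port, 3 id, 4 qtype, 5 class, 6 name, 7 proto,
	// 8 size, 9 do, 10 bufsize, 11 rcode, 12 flags, 13 rsize, 14 duration.
	fmt.Printf("client=%s qtype=%s name=%s proto=%s rcode=%s flags=%s took=%s\n",
		m[1], m[4], m[6], m[7], m[11], m[12], m[14])
}
	
	The NXDOMAIN answers for names like kubernetes.default.default.svc.cluster.local are expected search-path expansions, so nothing in these entries by itself indicates a DNS failure.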
	
	
	==> coredns [35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5] <==
	[INFO] 10.244.2.2:33316 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000351738s
	[INFO] 10.244.2.2:40861 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001441898s
	[INFO] 10.244.0.4:57140 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000078781s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135026s
	[INFO] 10.244.1.2:54055 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005238284s
	[INFO] 10.244.1.2:56033 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250432s
	[INFO] 10.244.1.2:35801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184148s
	[INFO] 10.244.1.2:59610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190826s
	[INFO] 10.244.2.2:33184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859772s
	[INFO] 10.244.2.2:46345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160195s
	[INFO] 10.244.2.2:58454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001735681s
	[INFO] 10.244.2.2:51235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213117s
	[INFO] 10.244.0.4:40361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002214882s
	[INFO] 10.244.0.4:35596 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091564s
	[INFO] 10.244.1.2:54454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176281s
	[INFO] 10.244.1.2:54571 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089015s
	[INFO] 10.244.2.2:54102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258038s
	[INFO] 10.244.2.2:51160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106978s
	[INFO] 10.244.2.2:57393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167598s
	[INFO] 10.244.0.4:39801 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084483s
	[INFO] 10.244.0.4:60729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097532s
	[INFO] 10.244.1.2:36580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164463s
	[INFO] 10.244.2.2:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036575s
	[INFO] 10.244.2.2:54375 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256014s
	[INFO] 10.244.0.4:46032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082269s
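
	The CoreDNS queries above show the cluster's normal search-path behavior: short names such as kubernetes.default are answered NXDOMAIN, while the fully qualified kubernetes.default.svc.cluster.local resolves with NOERROR. A minimal Go sketch of the successful lookup (a sketch only, assuming it runs inside a pod on this cluster so resolv.conf points at the kube-dns service at 10.96.0.10):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Fully qualified service name, so no search-domain expansion is involved;
		// inside the cluster this should return the kubernetes service cluster IP.
		addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}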
	
	
	==> describe nodes <==
	Name:               ha-053933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-053933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 081ddd3e0f204426846b528e120c10c6
	  System UUID:                081ddd3e-0f20-4426-846b-528e120c10c6
	  Boot ID:                    1dece28a-ef9e-423f-833d-5ccfd814e28e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gx88f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-7c65d6cfc9-sj44v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-tqtzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-053933                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-4gmn6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-053933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-053933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-7bwxp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-053933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-053933                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-053933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-053933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-053933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-053933 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  RegisteredNode           3m51s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	
	
	Name:               ha-053933-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:33:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:35:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-053933-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea0094a740a940c483867f94cc6c27db
	  System UUID:                ea0094a7-40a9-40c4-8386-7f94cc6c27db
	  Boot ID:                    c270f988-c787-4383-b26b-ec82a3153fd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cll72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-053933-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-cx4hw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-053933-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-ha-053933-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-zvblz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-053933-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-053933-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m16s                  cidrAllocator    Node ha-053933-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-053933-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-053933-m02 status is now: NodeNotReady
	
	
	Name:               ha-053933-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-053933-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2c62335e69d4ef7b1309ece17e10873
	  System UUID:                c2c62335-e69d-4ef7-b130-9ece17e10873
	  Boot ID:                    2e17b6e0-0617-4bea-8b9d-8cd903a9fcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvw9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-053933-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-6tzch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-apiserver-ha-053933-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-053933-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-dqqj6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-ha-053933-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-053933-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  CIDRAssignmentFailed     4m1s                 cidrAllocator    Node ha-053933-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-053933-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	
	
	Name:               ha-053933-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_35_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-053933-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 114115be4a5e4a82bdbd4b86727c66b7
	  System UUID:                114115be-4a5e-4a82-bdbd-4b86727c66b7
	  Boot ID:                    dba1fc43-1911-4c9b-b57d-d3bef52a7eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-874mt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-wmjjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-053933-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m                   cidrAllocator    Node ha-053933-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  NodeReady                2m43s                kubelet          Node ha-053933-m04 status is now: NodeReady
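
	In the node descriptions above, ha-053933-m02 carries the node.kubernetes.io/unreachable taints and all of its conditions are Unknown ("Kubelet stopped posting node status"), consistent with the secondary control-plane node having been stopped, while the other three nodes still report Ready. A minimal client-go sketch of that readiness check (a sketch only, assuming KUBECONFIG points at this cluster's kubeconfig):

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes KUBECONFIG is set; with both arguments empty this falls back
		// to in-cluster configuration instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// For ha-053933-m02 this prints Status=Unknown while its kubelet is stopped.
					fmt.Printf("%-16s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}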
	
	
	==> dmesg <==
	[Oct 7 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.846047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.647512] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.009818] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056187] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087371] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186817] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.108690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.296967] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.247594] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.068909] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.901650] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.502104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 12:32] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.286659] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +5.238921] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.342023] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 12:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866] <==
	{"level":"warn","ts":"2024-10-07T12:38:17.996700Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.001036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.005689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.011544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.016309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.022781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.030848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.043965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.045301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.048808Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.055371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.081078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.084340Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.092838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.100711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.104746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.108776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.116132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.125117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.126127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.131697Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f77dda0665c949d","rtt":"9.467689ms","error":"dial tcp 192.168.39.227:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-07T12:38:18.131800Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f77dda0665c949d","rtt":"1.234683ms","error":"dial tcp 192.168.39.227:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-07T12:38:18.134641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.135758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:18.181036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
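
	The etcd warnings above show the local member (900c4b71f7b778f3) dropping Raft heartbeats to peer 7f77dda0665c949d and the prober reporting "no route to host" for 192.168.39.227:2380, i.e. ha-053933-m02's peer URL is unreachable while that node is down. A minimal sketch that reproduces the same symptom with a plain TCP dial (the address and port are taken from the log; run from a host that can normally reach the cluster network):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the stopped peer's etcd peer port; while ha-053933-m02 is down
		// this is expected to fail with "no route to host" or a timeout.
		conn, err := net.DialTimeout("tcp", "192.168.39.227:2380", 3*time.Second)
		if err != nil {
			fmt.Println("peer unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("peer reachable")
	}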
	
	
	==> kernel <==
	 12:38:18 up 6 min,  0 users,  load average: 0.18, 0.17, 0.08
	Linux ha-053933 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c] <==
	I1007 12:37:40.808884       1 main.go:299] handling current node
	I1007 12:37:50.810322       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:37:50.810408       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:37:50.810651       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:37:50.810689       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:37:50.810804       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:37:50.810836       1 main.go:299] handling current node
	I1007 12:37:50.810865       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:37:50.810872       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.814625       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:00.814833       1 main.go:299] handling current node
	I1007 12:38:00.814970       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:00.814985       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.815723       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:00.815798       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:00.815998       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:00.816057       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:10.808104       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:10.808153       1 main.go:299] handling current node
	I1007 12:38:10.808168       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:10.808173       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:10.808359       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:10.808385       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:10.808430       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:10.808435       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38] <==
	I1007 12:32:02.949969       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1007 12:32:02.963249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I1007 12:32:02.964729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:32:02.971941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:32:03.069138       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 12:32:03.964342       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 12:32:03.987254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:32:04.095813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:32:08.516111       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 12:32:08.611991       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1007 12:34:48.798901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37568: use of closed network connection
	E1007 12:34:49.000124       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37592: use of closed network connection
	E1007 12:34:49.206162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37608: use of closed network connection
	E1007 12:34:49.419763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E1007 12:34:49.618246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37650: use of closed network connection
	E1007 12:34:49.830698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37678: use of closed network connection
	E1007 12:34:50.014306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37698: use of closed network connection
	E1007 12:34:50.203031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37722: use of closed network connection
	E1007 12:34:50.399836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37736: use of closed network connection
	E1007 12:34:50.721906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37754: use of closed network connection
	E1007 12:34:50.916874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37778: use of closed network connection
	E1007 12:34:51.129244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37784: use of closed network connection
	E1007 12:34:51.331880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37804: use of closed network connection
	E1007 12:34:51.534234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37816: use of closed network connection
	E1007 12:34:51.740225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37836: use of closed network connection
	
	
	==> kube-controller-manager [f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255] <==
	E1007 12:35:18.261020       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-053933-m04': failed to patch node CIDR: Node \"ha-053933-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1007 12:35:18.261043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.267395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.419356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.886255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.927634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m03"
	I1007 12:35:21.910317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.213570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.317164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.867893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.869105       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-053933-m04"
	I1007 12:35:22.944595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:28.233385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.043630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.044602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:35:36.061944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.755307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:48.386926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:36:37.247180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:36:37.247992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.283173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.296003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.649837ms"
	I1007 12:36:37.296097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.311µs"
	I1007 12:36:37.968993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:42.526972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
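
	The range_allocator error near the top of this block shows the node-ipam-controller briefly trying to patch ha-053933-m04 with two pod CIDRs, which the API rejects: a node may hold at most one CIDR per IP family, and podCIDR may only change from empty to a valid value. The later "Successfully synced" lines and the describe output above show m04 settling on 10.244.3.0/24. A minimal sketch that prints the CIDRs each node ended up with (a sketch only, same KUBECONFIG assumption as above; kindnet's "Node X has CIDR" lines are derived from the same field):

	package main

	import (
		"context"
		"fmt"
		"os"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range nodes.Items {
			// Spec.PodCIDRs holds at most one CIDR per IP family.
			fmt.Printf("%-16s %s\n", n.Name, strings.Join(n.Spec.PodCIDRs, ","))
		}
	}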
	
	
	==> kube-proxy [aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:32:09.744772       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:32:09.779605       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	E1007 12:32:09.779729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:32:09.875780       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:32:09.875870       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:32:09.875896       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:32:09.899096       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:32:09.900043       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:32:09.900063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:32:09.904977       1 config.go:199] "Starting service config controller"
	I1007 12:32:09.905625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:32:09.905998       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:32:09.906007       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:32:09.909098       1 config.go:328] "Starting node config controller"
	I1007 12:32:09.912651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:32:10.006461       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:32:10.006556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:32:10.013752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525] <==
	W1007 12:32:02.522045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:32:02.522209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:32:02.691725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:32:02.691861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 12:32:04.967169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 12:35:18.155212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.155405       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 055fbe2f-0b88-4875-9ee5-5672731cf7e9(kube-system/kindnet-tskmj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tskmj"
	E1007 12:35:18.155442       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-tskmj"
	I1007 12:35:18.155464       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.234037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.235784       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17a817ae-69ea-44f0-907d-a935057c340a(kube-system/kube-proxy-hkx4p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hkx4p"
	E1007 12:35:18.235899       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-hkx4p"
	I1007 12:35:18.235923       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.234494       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.237640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fe0255b5-5ad9-4633-a28d-ecdf64a0267c(kube-system/kindnet-gbqh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gbqh5"
	E1007 12:35:18.237709       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-gbqh5"
	I1007 12:35:18.237727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.300436       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300714       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71fc4648-ffa7-4b9c-b3be-35c98da41798(kube-system/kube-proxy-wmjjq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wmjjq"
	E1007 12:35:18.300906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-wmjjq"
	I1007 12:35:18.301040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	E1007 12:35:18.302463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cbe2af3e-e15d-4855-b598-450159e1b100(kube-system/kindnet-874mt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-874mt"
	E1007 12:35:18.302498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-874mt"
	I1007 12:35:18.302596       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	
	
	==> kubelet <==
	Oct 07 12:37:04 ha-053933 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:37:04 ha-053933 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:37:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:37:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248076    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248142    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250603    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250995    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252717    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252763    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.255287    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.257649    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.260273    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.261117    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264814    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264871    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.151993    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266021    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266073    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267592    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267615    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.84s)
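The repeated kubelet "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" lines in the log above quote the ImageFsInfoResponse the kubelet received from the CRI runtime (CRI-O in this job). A minimal, hypothetical Go sketch, not part of the test suite, that asks the node's runtime for the same image-filesystem stats, assuming a local out/minikube-linux-amd64 build, a still-running ha-053933 profile, and crictl being available inside the node image:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical reproduction step: shell into the ha-053933 node and ask the
	// CRI runtime directly for its image filesystem stats (the same data that
	// appears in the ImageFsInfoResponse quoted by the kubelet errors above).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-053933",
		"ssh", "sudo crictl imagefsinfo").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ssh failed:", err)
	}
}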

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.404319526s)
ha_test.go:415: expected profile "ha-053933" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-053933\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-053933\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-053933\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.152\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.227\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.53\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.244\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (1.474576535s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m03_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:31:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:31:18.148064  766330 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:18.148178  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148182  766330 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:18.148187  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148357  766330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:18.148967  766330 out.go:352] Setting JSON to false
	I1007 12:31:18.149958  766330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8027,"bootTime":1728296251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:18.150102  766330 start.go:139] virtualization: kvm guest
	I1007 12:31:18.152485  766330 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:18.154248  766330 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:18.154296  766330 notify.go:220] Checking for updates...
	I1007 12:31:18.157253  766330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:18.159046  766330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:18.160370  766330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.161706  766330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:18.163112  766330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:18.164841  766330 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:18.202110  766330 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:31:18.203531  766330 start.go:297] selected driver: kvm2
	I1007 12:31:18.203547  766330 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:31:18.203562  766330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:18.204518  766330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.204603  766330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:31:18.220705  766330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:31:18.220766  766330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:31:18.221021  766330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:31:18.221059  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:18.221106  766330 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:31:18.221116  766330 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:31:18.221169  766330 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:18.221279  766330 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.223403  766330 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:31:18.224688  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:18.224749  766330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:31:18.224761  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:31:18.224844  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:31:18.224857  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:31:18.225188  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:18.225228  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json: {Name:mk42211822a040c72189a8c96b9ffb20916f09bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:18.225385  766330 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:31:18.225414  766330 start.go:364] duration metric: took 16.211µs to acquireMachinesLock for "ha-053933"
	I1007 12:31:18.225431  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:31:18.225482  766330 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:31:18.227000  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:31:18.227165  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:18.227217  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:18.241971  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1007 12:31:18.242468  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:18.243060  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:31:18.243086  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:18.243440  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:18.243664  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:18.243802  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:18.243958  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:31:18.243992  766330 client.go:168] LocalClient.Create starting
	I1007 12:31:18.244024  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:31:18.244058  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244073  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244137  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:31:18.244157  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244173  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244190  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:31:18.244198  766330 main.go:141] libmachine: (ha-053933) Calling .PreCreateCheck
	I1007 12:31:18.244526  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:18.244944  766330 main.go:141] libmachine: Creating machine...
	I1007 12:31:18.244959  766330 main.go:141] libmachine: (ha-053933) Calling .Create
	I1007 12:31:18.245125  766330 main.go:141] libmachine: (ha-053933) Creating KVM machine...
	I1007 12:31:18.246330  766330 main.go:141] libmachine: (ha-053933) DBG | found existing default KVM network
	I1007 12:31:18.247162  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.246970  766353 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1007 12:31:18.247250  766330 main.go:141] libmachine: (ha-053933) DBG | created network xml: 
	I1007 12:31:18.247277  766330 main.go:141] libmachine: (ha-053933) DBG | <network>
	I1007 12:31:18.247291  766330 main.go:141] libmachine: (ha-053933) DBG |   <name>mk-ha-053933</name>
	I1007 12:31:18.247307  766330 main.go:141] libmachine: (ha-053933) DBG |   <dns enable='no'/>
	I1007 12:31:18.247318  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247331  766330 main.go:141] libmachine: (ha-053933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:31:18.247341  766330 main.go:141] libmachine: (ha-053933) DBG |     <dhcp>
	I1007 12:31:18.247353  766330 main.go:141] libmachine: (ha-053933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:31:18.247366  766330 main.go:141] libmachine: (ha-053933) DBG |     </dhcp>
	I1007 12:31:18.247382  766330 main.go:141] libmachine: (ha-053933) DBG |   </ip>
	I1007 12:31:18.247394  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247403  766330 main.go:141] libmachine: (ha-053933) DBG | </network>
	I1007 12:31:18.247414  766330 main.go:141] libmachine: (ha-053933) DBG | 
	I1007 12:31:18.252550  766330 main.go:141] libmachine: (ha-053933) DBG | trying to create private KVM network mk-ha-053933 192.168.39.0/24...
	I1007 12:31:18.323012  766330 main.go:141] libmachine: (ha-053933) DBG | private KVM network mk-ha-053933 192.168.39.0/24 created
	I1007 12:31:18.323051  766330 main.go:141] libmachine: (ha-053933) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.323065  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.322988  766353 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.323078  766330 main.go:141] libmachine: (ha-053933) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:31:18.323220  766330 main.go:141] libmachine: (ha-053933) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:31:18.600250  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.600066  766353 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa...
	I1007 12:31:18.865018  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864813  766353 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk...
	I1007 12:31:18.865057  766330 main.go:141] libmachine: (ha-053933) DBG | Writing magic tar header
	I1007 12:31:18.865071  766330 main.go:141] libmachine: (ha-053933) DBG | Writing SSH key tar header
	I1007 12:31:18.865083  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864941  766353 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.865103  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933
	I1007 12:31:18.865116  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 (perms=drwx------)
	I1007 12:31:18.865126  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:31:18.865135  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.865141  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:31:18.865149  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:31:18.865159  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:31:18.865166  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:31:18.865180  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home
	I1007 12:31:18.865192  766330 main.go:141] libmachine: (ha-053933) DBG | Skipping /home - not owner
	I1007 12:31:18.865206  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:31:18.865221  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:31:18.865229  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:31:18.865238  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:31:18.865245  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:18.866439  766330 main.go:141] libmachine: (ha-053933) define libvirt domain using xml: 
	I1007 12:31:18.866466  766330 main.go:141] libmachine: (ha-053933) <domain type='kvm'>
	I1007 12:31:18.866476  766330 main.go:141] libmachine: (ha-053933)   <name>ha-053933</name>
	I1007 12:31:18.866483  766330 main.go:141] libmachine: (ha-053933)   <memory unit='MiB'>2200</memory>
	I1007 12:31:18.866492  766330 main.go:141] libmachine: (ha-053933)   <vcpu>2</vcpu>
	I1007 12:31:18.866503  766330 main.go:141] libmachine: (ha-053933)   <features>
	I1007 12:31:18.866510  766330 main.go:141] libmachine: (ha-053933)     <acpi/>
	I1007 12:31:18.866520  766330 main.go:141] libmachine: (ha-053933)     <apic/>
	I1007 12:31:18.866530  766330 main.go:141] libmachine: (ha-053933)     <pae/>
	I1007 12:31:18.866546  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866569  766330 main.go:141] libmachine: (ha-053933)   </features>
	I1007 12:31:18.866589  766330 main.go:141] libmachine: (ha-053933)   <cpu mode='host-passthrough'>
	I1007 12:31:18.866598  766330 main.go:141] libmachine: (ha-053933)   
	I1007 12:31:18.866607  766330 main.go:141] libmachine: (ha-053933)   </cpu>
	I1007 12:31:18.866617  766330 main.go:141] libmachine: (ha-053933)   <os>
	I1007 12:31:18.866624  766330 main.go:141] libmachine: (ha-053933)     <type>hvm</type>
	I1007 12:31:18.866630  766330 main.go:141] libmachine: (ha-053933)     <boot dev='cdrom'/>
	I1007 12:31:18.866636  766330 main.go:141] libmachine: (ha-053933)     <boot dev='hd'/>
	I1007 12:31:18.866641  766330 main.go:141] libmachine: (ha-053933)     <bootmenu enable='no'/>
	I1007 12:31:18.866647  766330 main.go:141] libmachine: (ha-053933)   </os>
	I1007 12:31:18.866652  766330 main.go:141] libmachine: (ha-053933)   <devices>
	I1007 12:31:18.866659  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='cdrom'>
	I1007 12:31:18.866666  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/boot2docker.iso'/>
	I1007 12:31:18.866673  766330 main.go:141] libmachine: (ha-053933)       <target dev='hdc' bus='scsi'/>
	I1007 12:31:18.866678  766330 main.go:141] libmachine: (ha-053933)       <readonly/>
	I1007 12:31:18.866683  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866691  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='disk'>
	I1007 12:31:18.866702  766330 main.go:141] libmachine: (ha-053933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:31:18.866711  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk'/>
	I1007 12:31:18.866722  766330 main.go:141] libmachine: (ha-053933)       <target dev='hda' bus='virtio'/>
	I1007 12:31:18.866731  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866737  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866745  766330 main.go:141] libmachine: (ha-053933)       <source network='mk-ha-053933'/>
	I1007 12:31:18.866749  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866755  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866759  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866768  766330 main.go:141] libmachine: (ha-053933)       <source network='default'/>
	I1007 12:31:18.866775  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866780  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866786  766330 main.go:141] libmachine: (ha-053933)     <serial type='pty'>
	I1007 12:31:18.866791  766330 main.go:141] libmachine: (ha-053933)       <target port='0'/>
	I1007 12:31:18.866798  766330 main.go:141] libmachine: (ha-053933)     </serial>
	I1007 12:31:18.866802  766330 main.go:141] libmachine: (ha-053933)     <console type='pty'>
	I1007 12:31:18.866810  766330 main.go:141] libmachine: (ha-053933)       <target type='serial' port='0'/>
	I1007 12:31:18.866821  766330 main.go:141] libmachine: (ha-053933)     </console>
	I1007 12:31:18.866827  766330 main.go:141] libmachine: (ha-053933)     <rng model='virtio'>
	I1007 12:31:18.866834  766330 main.go:141] libmachine: (ha-053933)       <backend model='random'>/dev/random</backend>
	I1007 12:31:18.866840  766330 main.go:141] libmachine: (ha-053933)     </rng>
	I1007 12:31:18.866844  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866850  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866855  766330 main.go:141] libmachine: (ha-053933)   </devices>
	I1007 12:31:18.866860  766330 main.go:141] libmachine: (ha-053933) </domain>
	I1007 12:31:18.866868  766330 main.go:141] libmachine: (ha-053933) 
	I1007 12:31:18.871598  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:91:b8:36 in network default
	I1007 12:31:18.872268  766330 main.go:141] libmachine: (ha-053933) Ensuring networks are active...
	I1007 12:31:18.872288  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:18.873069  766330 main.go:141] libmachine: (ha-053933) Ensuring network default is active
	I1007 12:31:18.873363  766330 main.go:141] libmachine: (ha-053933) Ensuring network mk-ha-053933 is active
	I1007 12:31:18.873853  766330 main.go:141] libmachine: (ha-053933) Getting domain xml...
	I1007 12:31:18.874562  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:19.211616  766330 main.go:141] libmachine: (ha-053933) Waiting to get IP...
	I1007 12:31:19.212423  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.212778  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.212812  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.212764  766353 retry.go:31] will retry after 226.747121ms: waiting for machine to come up
	I1007 12:31:19.441331  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.441786  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.441837  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.441730  766353 retry.go:31] will retry after 274.527206ms: waiting for machine to come up
	I1007 12:31:19.718508  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.719027  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.719064  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.718969  766353 retry.go:31] will retry after 356.880394ms: waiting for machine to come up
	I1007 12:31:20.077626  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.078112  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.078145  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.078091  766353 retry.go:31] will retry after 415.686035ms: waiting for machine to come up
	I1007 12:31:20.495868  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.496297  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.496328  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.496232  766353 retry.go:31] will retry after 565.036299ms: waiting for machine to come up
	I1007 12:31:21.062533  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.063181  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.063212  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.063112  766353 retry.go:31] will retry after 934.304139ms: waiting for machine to come up
	I1007 12:31:21.999277  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.999729  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.999763  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.999684  766353 retry.go:31] will retry after 862.178533ms: waiting for machine to come up
	I1007 12:31:22.863123  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:22.863626  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:22.863658  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:22.863574  766353 retry.go:31] will retry after 1.201609733s: waiting for machine to come up
	I1007 12:31:24.066671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:24.067072  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:24.067104  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:24.067015  766353 retry.go:31] will retry after 1.419758916s: waiting for machine to come up
	I1007 12:31:25.488770  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:25.489216  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:25.489240  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:25.489182  766353 retry.go:31] will retry after 2.248635623s: waiting for machine to come up
	I1007 12:31:27.740776  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:27.741277  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:27.741301  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:27.741240  766353 retry.go:31] will retry after 1.919055927s: waiting for machine to come up
	I1007 12:31:29.662363  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:29.662857  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:29.663141  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:29.662878  766353 retry.go:31] will retry after 3.284332028s: waiting for machine to come up
	I1007 12:31:32.951614  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:32.952006  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:32.952134  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:32.951952  766353 retry.go:31] will retry after 3.413281695s: waiting for machine to come up
	I1007 12:31:36.369285  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:36.369674  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:36.369704  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:36.369624  766353 retry.go:31] will retry after 5.240968669s: waiting for machine to come up
	I1007 12:31:41.615028  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615539  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has current primary IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615555  766330 main.go:141] libmachine: (ha-053933) Found IP for machine: 192.168.39.152
	I1007 12:31:41.615563  766330 main.go:141] libmachine: (ha-053933) Reserving static IP address...
	I1007 12:31:41.615914  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "ha-053933", mac: "52:54:00:7e:91:1b", ip: "192.168.39.152"} in network mk-ha-053933
	I1007 12:31:41.698423  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:41.698453  766330 main.go:141] libmachine: (ha-053933) Reserved static IP address: 192.168.39.152
	I1007 12:31:41.698466  766330 main.go:141] libmachine: (ha-053933) Waiting for SSH to be available...
	I1007 12:31:41.701233  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.701575  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933
	I1007 12:31:41.701604  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:7e:91:1b
	I1007 12:31:41.701733  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:41.701762  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:41.701811  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:41.701844  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:41.701865  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:41.705812  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:31:41.705841  766330 main.go:141] libmachine: (ha-053933) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:31:41.705848  766330 main.go:141] libmachine: (ha-053933) DBG | command : exit 0
	I1007 12:31:41.705853  766330 main.go:141] libmachine: (ha-053933) DBG | err     : exit status 255
	I1007 12:31:41.705861  766330 main.go:141] libmachine: (ha-053933) DBG | output  : 
	I1007 12:31:44.706593  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:44.709072  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709617  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.709649  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709785  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:44.709814  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:44.709843  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:44.709856  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:44.709871  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:44.834399  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: <nil>: 
	I1007 12:31:44.834682  766330 main.go:141] libmachine: (ha-053933) KVM machine creation complete!
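The "will retry after ..." lines above come from libmachine polling the new KVM guest until DHCP hands it an IP and SSH answers. As a rough illustration of that pattern only (a hypothetical sketch, not minikube's actual retry.go), a jittered-backoff wait loop in Go could look like this:

// Minimal sketch of randomized-backoff polling, as seen in the
// "will retry after Ns" lines above: keep probing until the check
// succeeds or an overall deadline passes. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a growing, jittered delay until it succeeds
// or the overall timeout elapses.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine")
		}
		// Jitter the delay so repeated probes do not run in lockstep,
		// then grow it, roughly mirroring the 1s..5s retries in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP address"
		}
		return nil
	}, 30*time.Second)
}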
	I1007 12:31:44.834978  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:44.835619  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.835838  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.836043  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:31:44.836062  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:31:44.837184  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:31:44.837198  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:31:44.837203  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:31:44.837209  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.839398  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839807  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.839830  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839939  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.840108  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840281  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840429  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.840654  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.840918  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.840931  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:31:44.945582  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:44.945632  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:31:44.945644  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.948258  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948719  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.948754  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948921  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.949136  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949341  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949504  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.949690  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.949946  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.949963  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:31:45.055227  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:31:45.055350  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:31:45.055364  766330 main.go:141] libmachine: Provisioning with buildroot...
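Provisioner detection above works by running cat /etc/os-release over SSH and matching the ID field ("found compatible host: buildroot"). A minimal, hypothetical sketch of parsing that output, not libmachine's actual detector:

// Parse /etc/os-release text (as returned by the SSH command above)
// into a key/value map and read the ID / VERSION_ID fields.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}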
	I1007 12:31:45.055378  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055638  766330 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:31:45.055680  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055865  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.058671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059121  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.059156  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059299  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.059582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059753  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059896  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.060046  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.060230  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.060242  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:31:45.177180  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:31:45.177214  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.180205  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180610  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.180640  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.181104  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181275  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181434  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.181657  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.181837  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.181854  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:31:45.296167  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:45.296213  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:31:45.296262  766330 buildroot.go:174] setting up certificates
	I1007 12:31:45.296275  766330 provision.go:84] configureAuth start
	I1007 12:31:45.296287  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.296598  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.299370  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299721  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.299769  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.302528  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.302981  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.303013  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.303173  766330 provision.go:143] copyHostCerts
	I1007 12:31:45.303222  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303263  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:31:45.303285  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303361  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:31:45.303500  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303523  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:31:45.303528  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303559  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:31:45.303616  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303633  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:31:45.303637  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303657  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:31:45.303708  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
	I1007 12:31:45.422772  766330 provision.go:177] copyRemoteCerts
	I1007 12:31:45.422847  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:31:45.422884  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.426109  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426432  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.426461  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426620  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.426796  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.426987  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.427121  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.508256  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:31:45.508354  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:31:45.535023  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:31:45.535097  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:31:45.561047  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:31:45.561146  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:31:45.586470  766330 provision.go:87] duration metric: took 290.178076ms to configureAuth
	I1007 12:31:45.586509  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:31:45.586752  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:45.586838  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.589503  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.589873  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.589917  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.590215  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.590402  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590554  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590703  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.590899  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.591142  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.591160  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:31:45.816081  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:31:45.816125  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:31:45.816137  766330 main.go:141] libmachine: (ha-053933) Calling .GetURL
	I1007 12:31:45.817540  766330 main.go:141] libmachine: (ha-053933) DBG | Using libvirt version 6000000
	I1007 12:31:45.820289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820694  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.820725  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820851  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:31:45.820871  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:31:45.820882  766330 client.go:171] duration metric: took 27.576881663s to LocalClient.Create
	I1007 12:31:45.820914  766330 start.go:167] duration metric: took 27.57695761s to libmachine.API.Create "ha-053933"
	I1007 12:31:45.820939  766330 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:31:45.820955  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:31:45.820986  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:45.821218  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:31:45.821261  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.823471  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.823791  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.823834  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.824015  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.824234  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.824403  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.824535  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.905405  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:31:45.910330  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:31:45.910363  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:31:45.910424  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:31:45.910498  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:31:45.910509  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:31:45.910617  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:31:45.921262  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:45.947335  766330 start.go:296] duration metric: took 126.377039ms for postStartSetup
	I1007 12:31:45.947395  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:45.948057  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.950566  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.950901  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.950931  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.951158  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:45.951337  766330 start.go:128] duration metric: took 27.725842508s to createHost
	I1007 12:31:45.951369  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.953682  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954057  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.954084  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954210  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.954414  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954585  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954727  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.954891  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.955077  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.955089  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:31:46.059048  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304306.039624942
	
	I1007 12:31:46.059075  766330 fix.go:216] guest clock: 1728304306.039624942
	I1007 12:31:46.059083  766330 fix.go:229] Guest: 2024-10-07 12:31:46.039624942 +0000 UTC Remote: 2024-10-07 12:31:45.951349706 +0000 UTC m=+27.845880248 (delta=88.275236ms)
	I1007 12:31:46.059106  766330 fix.go:200] guest clock delta is within tolerance: 88.275236ms
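The guest-clock check above runs date +%s.%N on the guest and compares the result with the host clock. A small illustrative sketch of that comparison, using the sample value from the log; the 2s tolerance here is an assumption for the example, not minikube's configured threshold:

// Parse an epoch "seconds.nanoseconds" string (the output of `date +%s.%N`)
// and compute the absolute delta against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1728304306.039624942" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so short fractions parse correctly.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1728304306.039624942") // sample value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}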
	I1007 12:31:46.059111  766330 start.go:83] releasing machines lock for "ha-053933", held for 27.833688154s
	I1007 12:31:46.059131  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.059394  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:46.062064  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062406  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.062431  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062578  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063106  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063318  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063436  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:31:46.063484  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.063563  766330 ssh_runner.go:195] Run: cat /version.json
	I1007 12:31:46.063582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.066118  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066393  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066431  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066454  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066641  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066729  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066762  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066811  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.066931  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066955  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067124  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.067115  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.067267  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067400  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.143506  766330 ssh_runner.go:195] Run: systemctl --version
	I1007 12:31:46.170858  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:31:46.332209  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:31:46.338580  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:31:46.338677  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:46.356826  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:31:46.356863  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:31:46.356954  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:31:46.374524  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:31:46.390007  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:31:46.390089  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:31:46.404935  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:31:46.420186  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:31:46.537561  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:31:46.724537  766330 docker.go:233] disabling docker service ...
	I1007 12:31:46.724631  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:31:46.740520  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:31:46.754710  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:31:46.868070  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:31:46.983211  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:31:46.998357  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:31:47.018646  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:31:47.018734  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.030677  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:31:47.030766  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.042531  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.053856  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.065763  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:31:47.077170  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.088459  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.106901  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.118161  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:31:47.128388  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:31:47.128462  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:31:47.142126  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:31:47.154515  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:47.283963  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:31:47.385321  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:31:47.385405  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:31:47.390485  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:31:47.390552  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:31:47.394825  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:31:47.439074  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:31:47.439187  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.469132  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.501636  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:31:47.503367  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:47.506449  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.506817  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:47.506859  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.507082  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:31:47.511597  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:47.525698  766330 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:31:47.525829  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:47.525874  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:47.561011  766330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:31:47.561094  766330 ssh_runner.go:195] Run: which lz4
	I1007 12:31:47.565196  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:31:47.565316  766330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:31:47.569571  766330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:31:47.569613  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:31:49.022834  766330 crio.go:462] duration metric: took 1.457534476s to copy over tarball
	I1007 12:31:49.022945  766330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:31:51.131868  766330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108889496s)
	I1007 12:31:51.131914  766330 crio.go:469] duration metric: took 2.109034387s to extract the tarball
	I1007 12:31:51.131926  766330 ssh_runner.go:146] rm: /preloaded.tar.lz4
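The preload step above copies preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it into /var before removing it. A hypothetical sketch of that extraction step, mirroring the tar invocation shown in the log (paths and the use of exec.Command are illustrative only):

// Unpack a preloaded image tarball into /var with lz4, preserving
// security.capability xattrs, as in the logged command.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball string) error {
	// Mirrors:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}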
	I1007 12:31:51.169816  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:51.217403  766330 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:31:51.217431  766330 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:31:51.217440  766330 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:31:51.217556  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:31:51.217655  766330 ssh_runner.go:195] Run: crio config
	I1007 12:31:51.271379  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:51.271408  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:31:51.271420  766330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:31:51.271445  766330 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:31:51.271623  766330 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
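The kubeadm config printed above is rendered from the kubeadm options logged just before it. The following is only a toy text/template sketch of that kind of rendering, not minikube's actual template; the clusterOpts field names are invented for the example and the values are copied from this log:

// Render a ClusterConfiguration-style fragment from a small options struct.
package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	BindPort      int
	PodSubnet     string
	ServiceSubnet string
	ClusterName   string
	K8sVersion    string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.K8sVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	// Values taken from the kubeadm options printed earlier in this log.
	opts := clusterOpts{
		BindPort:      8443,
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
		ClusterName:   "mk",
		K8sVersion:    "v1.31.1",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}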
	
	I1007 12:31:51.271654  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:31:51.271699  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:31:51.289463  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:31:51.289607  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:31:51.289677  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:31:51.300325  766330 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:31:51.300403  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:31:51.311044  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:31:51.329552  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:31:51.347746  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:31:51.366188  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:31:51.384590  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:31:51.388865  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:51.402571  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:51.531092  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:51.550538  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:31:51.550568  766330 certs.go:194] generating shared ca certs ...
	I1007 12:31:51.550589  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.550791  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:31:51.550844  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:31:51.550855  766330 certs.go:256] generating profile certs ...
	I1007 12:31:51.550949  766330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:31:51.550971  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt with IP's: []
	I1007 12:31:51.873489  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt ...
	I1007 12:31:51.873532  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt: {Name:mkf7b8a7f4d9827c14fd0fbc8bb02e2f79d65528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873758  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key ...
	I1007 12:31:51.873776  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key: {Name:mk6b5a827040be723c18ebdcd9fe7d1599565bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873894  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a
	I1007 12:31:51.873912  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I1007 12:31:52.061549  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a ...
	I1007 12:31:52.061587  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a: {Name:mk1a012d659f1c8c4afc92ca485eba408eb37a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061787  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a ...
	I1007 12:31:52.061804  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a: {Name:mkb1195bd1ddd6ea78076dea0e840887aeae92ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061908  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:31:52.062012  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:31:52.062107  766330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:31:52.062125  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt with IP's: []
	I1007 12:31:52.119663  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt ...
	I1007 12:31:52.119698  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt: {Name:mkf6d674dcac47b878e8df13383f77bcf932d249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.119900  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key ...
	I1007 12:31:52.119913  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key: {Name:mk301510b9dc1296a9e7f127da3f0d4b86905808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
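certs.go above generates the profile's apiserver and proxy-client certificates with the SAN list shown in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.152, 192.168.39.254). For illustration only, a standard-library sketch of creating a certificate with IP SANs; it is self-signed for brevity, whereas minikube signs against its CA, and the names and validity period here are assumptions:

// Generate an RSA key and a self-signed serving certificate carrying
// DNS and IP SANs, then print the certificate as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-053933", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.152"),
			net.ParseIP("192.168.39.254"),
		},
	}
	// Template is used as its own parent, i.e. self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}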
	I1007 12:31:52.120033  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:31:52.120053  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:31:52.120064  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:31:52.120077  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:31:52.120087  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:31:52.120118  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:31:52.120142  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:31:52.120155  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:31:52.120209  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:31:52.120251  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:31:52.120261  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:31:52.120290  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:31:52.120312  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:31:52.120339  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:31:52.120379  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:52.120408  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.120422  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.120434  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.121128  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:31:52.149003  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:31:52.175017  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:31:52.201648  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:31:52.228352  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:31:52.255290  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:31:52.282215  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:31:52.309286  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:31:52.337694  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:31:52.366883  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:31:52.402754  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:31:52.430306  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:31:52.451397  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:31:52.458450  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:31:52.470676  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476879  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476941  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.483560  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:31:52.495531  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:31:52.507273  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512685  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512760  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.519035  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:31:52.530701  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:31:52.542163  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547093  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547169  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.553420  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
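The three certificate installs above all follow OpenSSL's hash-based CA lookup convention: the file is linked under /usr/share/ca-certificates and /etc/ssl/certs, its subject hash is computed with openssl x509 -hash -noout, and a <hash>.0 symlink is created so TLS clients can locate the CA by hash. A minimal shell sketch of the same sequence, using a hypothetical CA file named example-ca.pem:

    # link the CA into the shared certificate directories (hypothetical file name)
    sudo ln -fs /usr/share/ca-certificates/example-ca.pem /etc/ssl/certs/example-ca.pem
    # compute the OpenSSL subject hash of the certificate
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
    # expose the CA under the hash-based name that OpenSSL searches at verification time
    sudo ln -fs /etc/ssl/certs/example-ca.pem "/etc/ssl/certs/${HASH}.0"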
	I1007 12:31:52.565081  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:31:52.569549  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:31:52.569630  766330 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:52.569737  766330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:31:52.569800  766330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:31:52.613192  766330 cri.go:89] found id: ""
	I1007 12:31:52.613311  766330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:31:52.625713  766330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:31:52.636220  766330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:31:52.646590  766330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:31:52.646626  766330 kubeadm.go:157] found existing configuration files:
	
	I1007 12:31:52.646686  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:31:52.656870  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:31:52.656944  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:31:52.667467  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:31:52.677109  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:31:52.677186  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:31:52.687168  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.696969  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:31:52.697035  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.706604  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:31:52.716252  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:31:52.716325  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:31:52.726572  766330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:31:52.847487  766330 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:31:52.847581  766330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:31:52.955260  766330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:31:52.955420  766330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:31:52.955545  766330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:31:52.964537  766330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:31:53.051755  766330 out.go:235]   - Generating certificates and keys ...
	I1007 12:31:53.051938  766330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:31:53.052035  766330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:31:53.320791  766330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:31:53.468201  766330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:31:53.842801  766330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:31:53.969642  766330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:31:54.101242  766330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:31:54.101440  766330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.456134  766330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:31:54.456354  766330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.521797  766330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:31:54.769778  766330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:31:55.125227  766330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:31:55.125448  766330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:31:55.361551  766330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:31:55.783698  766330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:31:56.057409  766330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:31:56.211507  766330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:31:56.348279  766330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:31:56.349002  766330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:31:56.353525  766330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:31:56.355620  766330 out.go:235]   - Booting up control plane ...
	I1007 12:31:56.355760  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:31:56.356147  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:31:56.356974  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:31:56.373175  766330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:31:56.381538  766330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:31:56.381594  766330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:31:56.521323  766330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:31:56.521511  766330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:31:57.022943  766330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.739695ms
	I1007 12:31:57.023054  766330 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:32:03.058810  766330 kubeadm.go:310] [api-check] The API server is healthy after 6.037121779s
	I1007 12:32:03.072819  766330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:32:03.101026  766330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:32:03.645977  766330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:32:03.646231  766330 kubeadm.go:310] [mark-control-plane] Marking the node ha-053933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:32:03.661217  766330 kubeadm.go:310] [bootstrap-token] Using token: ofkgus.681l1bfefmhh1xkb
	I1007 12:32:03.662957  766330 out.go:235]   - Configuring RBAC rules ...
	I1007 12:32:03.663116  766330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:32:03.674911  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:32:03.697863  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:32:03.703512  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:32:03.708092  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:32:03.713563  766330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:32:03.734636  766330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:32:03.997011  766330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:32:04.464216  766330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:32:04.465131  766330 kubeadm.go:310] 
	I1007 12:32:04.465191  766330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:32:04.465199  766330 kubeadm.go:310] 
	I1007 12:32:04.465336  766330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:32:04.465360  766330 kubeadm.go:310] 
	I1007 12:32:04.465394  766330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:32:04.465446  766330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:32:04.465491  766330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:32:04.465504  766330 kubeadm.go:310] 
	I1007 12:32:04.465572  766330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:32:04.465599  766330 kubeadm.go:310] 
	I1007 12:32:04.465644  766330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:32:04.465663  766330 kubeadm.go:310] 
	I1007 12:32:04.465719  766330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:32:04.465794  766330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:32:04.465885  766330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:32:04.465901  766330 kubeadm.go:310] 
	I1007 12:32:04.466075  766330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:32:04.466193  766330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:32:04.466201  766330 kubeadm.go:310] 
	I1007 12:32:04.466294  766330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466394  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:32:04.466415  766330 kubeadm.go:310] 	--control-plane 
	I1007 12:32:04.466421  766330 kubeadm.go:310] 
	I1007 12:32:04.466490  766330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:32:04.466497  766330 kubeadm.go:310] 
	I1007 12:32:04.466565  766330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466661  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:32:04.467760  766330 kubeadm.go:310] W1007 12:31:52.830915     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468039  766330 kubeadm.go:310] W1007 12:31:52.831996     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468166  766330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:32:04.468194  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:32:04.468205  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:32:04.470298  766330 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:32:04.471574  766330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:32:04.477802  766330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:32:04.477826  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:32:04.497072  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:32:04.906135  766330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:32:04.906201  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:04.906237  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933 minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=true
	I1007 12:32:05.063682  766330 ops.go:34] apiserver oom_adj: -16
	I1007 12:32:05.063698  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:05.564187  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.063920  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.563953  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.064483  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.564765  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.064739  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.564036  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.063899  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.198443  766330 kubeadm.go:1113] duration metric: took 4.292302963s to wait for elevateKubeSystemPrivileges
	I1007 12:32:09.198484  766330 kubeadm.go:394] duration metric: took 16.62887336s to StartCluster
	I1007 12:32:09.198511  766330 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.198603  766330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.199399  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.199661  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:32:09.199654  766330 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:09.199683  766330 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:32:09.199750  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:32:09.199769  766330 addons.go:69] Setting storage-provisioner=true in profile "ha-053933"
	I1007 12:32:09.199790  766330 addons.go:234] Setting addon storage-provisioner=true in "ha-053933"
	I1007 12:32:09.199789  766330 addons.go:69] Setting default-storageclass=true in profile "ha-053933"
	I1007 12:32:09.199827  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.199861  766330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-053933"
	I1007 12:32:09.199924  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:09.200250  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200297  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.200379  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200403  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.217502  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I1007 12:32:09.217554  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1007 12:32:09.217985  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218145  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218593  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218622  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.218725  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218753  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.219006  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219124  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219326  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.219637  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.219691  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.221998  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.222368  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:32:09.223019  766330 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:32:09.223381  766330 addons.go:234] Setting addon default-storageclass=true in "ha-053933"
	I1007 12:32:09.223435  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.223846  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.223902  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.237604  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I1007 12:32:09.238161  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.238820  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.238847  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.239267  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.239621  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.242388  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.242754  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1007 12:32:09.243274  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.243977  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.244007  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.244396  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.244986  766330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:32:09.245068  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.245147  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.246976  766330 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.247004  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:32:09.247031  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.251289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.251823  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.251851  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.252064  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.252294  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.252448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.252580  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.263439  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1007 12:32:09.263833  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.264713  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.264733  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.265269  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.265519  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.267198  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.267411  766330 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:09.267431  766330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:32:09.267448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.271160  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.271638  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.271652  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.272078  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.272247  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.272388  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.272476  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.422833  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:32:09.443940  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.510999  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:10.102670  766330 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
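The sed pipeline above patches the CoreDNS Corefile in place: it inserts a log directive ahead of errors and a hosts block that resolves host.minikube.internal to 192.168.39.1 ahead of the forward directive, which is what the "host record injected" line confirms. A quick way to check the result on the running cluster (a hedged sketch, assuming the kubectl context carries the profile name ha-053933):

    # print the patched Corefile and show the injected hosts block
    kubectl --context ha-053933 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'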
	I1007 12:32:10.350678  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350704  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.350784  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350815  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351026  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351046  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351056  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351063  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351128  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.351191  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351222  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351239  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351246  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.352633  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352653  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352669  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.352691  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352714  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352813  766330 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:32:10.352834  766330 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:32:10.352951  766330 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:32:10.352963  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.352974  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.352984  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.364518  766330 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:32:10.365197  766330 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:32:10.365213  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.365222  766330 round_trippers.go:473]     Content-Type: application/json
	I1007 12:32:10.365226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.365229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.368346  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:32:10.368537  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.368555  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.368875  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.368889  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.368895  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.371604  766330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:32:10.373030  766330 addons.go:510] duration metric: took 1.173351959s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:32:10.373068  766330 start.go:246] waiting for cluster config update ...
	I1007 12:32:10.373085  766330 start.go:255] writing updated cluster config ...
	I1007 12:32:10.375098  766330 out.go:201] 
	I1007 12:32:10.377249  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:10.377439  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.379490  766330 out.go:177] * Starting "ha-053933-m02" control-plane node in "ha-053933" cluster
	I1007 12:32:10.381087  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:32:10.381130  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:32:10.381324  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:32:10.381339  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:32:10.381436  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.381664  766330 start.go:360] acquireMachinesLock for ha-053933-m02: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:32:10.381718  766330 start.go:364] duration metric: took 27.543µs to acquireMachinesLock for "ha-053933-m02"
	I1007 12:32:10.381752  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:10.381840  766330 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:32:10.383550  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:32:10.383680  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:10.383748  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:10.399329  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1007 12:32:10.399900  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:10.400460  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:10.400489  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:10.400855  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:10.401087  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:10.401325  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:10.401564  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:32:10.401597  766330 client.go:168] LocalClient.Create starting
	I1007 12:32:10.401634  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:32:10.401683  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401708  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401774  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:32:10.401806  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401824  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401883  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:32:10.401911  766330 main.go:141] libmachine: (ha-053933-m02) Calling .PreCreateCheck
	I1007 12:32:10.402163  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:10.402584  766330 main.go:141] libmachine: Creating machine...
	I1007 12:32:10.402602  766330 main.go:141] libmachine: (ha-053933-m02) Calling .Create
	I1007 12:32:10.402815  766330 main.go:141] libmachine: (ha-053933-m02) Creating KVM machine...
	I1007 12:32:10.404630  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing default KVM network
	I1007 12:32:10.404848  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing private KVM network mk-ha-053933
	I1007 12:32:10.405187  766330 main.go:141] libmachine: (ha-053933-m02) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.405209  766330 main.go:141] libmachine: (ha-053933-m02) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:32:10.405302  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.405168  766716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.405466  766330 main.go:141] libmachine: (ha-053933-m02) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:32:10.686269  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.686123  766716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa...
	I1007 12:32:10.953304  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953079  766716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk...
	I1007 12:32:10.953335  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing magic tar header
	I1007 12:32:10.953347  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing SSH key tar header
	I1007 12:32:10.953354  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953302  766716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.953491  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02
	I1007 12:32:10.953520  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 (perms=drwx------)
	I1007 12:32:10.953532  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:32:10.953546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.953559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:32:10.953567  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:32:10.953577  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:32:10.953583  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:32:10.953594  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:32:10.953602  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:32:10.953610  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:32:10.953626  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:10.953639  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:32:10.953649  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home
	I1007 12:32:10.953661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Skipping /home - not owner
	I1007 12:32:10.954892  766330 main.go:141] libmachine: (ha-053933-m02) define libvirt domain using xml: 
	I1007 12:32:10.954919  766330 main.go:141] libmachine: (ha-053933-m02) <domain type='kvm'>
	I1007 12:32:10.954926  766330 main.go:141] libmachine: (ha-053933-m02)   <name>ha-053933-m02</name>
	I1007 12:32:10.954934  766330 main.go:141] libmachine: (ha-053933-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:32:10.954971  766330 main.go:141] libmachine: (ha-053933-m02)   <vcpu>2</vcpu>
	I1007 12:32:10.954998  766330 main.go:141] libmachine: (ha-053933-m02)   <features>
	I1007 12:32:10.955008  766330 main.go:141] libmachine: (ha-053933-m02)     <acpi/>
	I1007 12:32:10.955019  766330 main.go:141] libmachine: (ha-053933-m02)     <apic/>
	I1007 12:32:10.955028  766330 main.go:141] libmachine: (ha-053933-m02)     <pae/>
	I1007 12:32:10.955038  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955048  766330 main.go:141] libmachine: (ha-053933-m02)   </features>
	I1007 12:32:10.955059  766330 main.go:141] libmachine: (ha-053933-m02)   <cpu mode='host-passthrough'>
	I1007 12:32:10.955086  766330 main.go:141] libmachine: (ha-053933-m02)   
	I1007 12:32:10.955107  766330 main.go:141] libmachine: (ha-053933-m02)   </cpu>
	I1007 12:32:10.955118  766330 main.go:141] libmachine: (ha-053933-m02)   <os>
	I1007 12:32:10.955130  766330 main.go:141] libmachine: (ha-053933-m02)     <type>hvm</type>
	I1007 12:32:10.955144  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='cdrom'/>
	I1007 12:32:10.955153  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='hd'/>
	I1007 12:32:10.955164  766330 main.go:141] libmachine: (ha-053933-m02)     <bootmenu enable='no'/>
	I1007 12:32:10.955170  766330 main.go:141] libmachine: (ha-053933-m02)   </os>
	I1007 12:32:10.955176  766330 main.go:141] libmachine: (ha-053933-m02)   <devices>
	I1007 12:32:10.955183  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='cdrom'>
	I1007 12:32:10.955199  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/boot2docker.iso'/>
	I1007 12:32:10.955214  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:32:10.955226  766330 main.go:141] libmachine: (ha-053933-m02)       <readonly/>
	I1007 12:32:10.955236  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955247  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='disk'>
	I1007 12:32:10.955259  766330 main.go:141] libmachine: (ha-053933-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:32:10.955273  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk'/>
	I1007 12:32:10.955284  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:32:10.955295  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955317  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955337  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='mk-ha-053933'/>
	I1007 12:32:10.955355  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955372  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955385  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955397  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='default'/>
	I1007 12:32:10.955410  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955419  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955429  766330 main.go:141] libmachine: (ha-053933-m02)     <serial type='pty'>
	I1007 12:32:10.955444  766330 main.go:141] libmachine: (ha-053933-m02)       <target port='0'/>
	I1007 12:32:10.955456  766330 main.go:141] libmachine: (ha-053933-m02)     </serial>
	I1007 12:32:10.955483  766330 main.go:141] libmachine: (ha-053933-m02)     <console type='pty'>
	I1007 12:32:10.955500  766330 main.go:141] libmachine: (ha-053933-m02)       <target type='serial' port='0'/>
	I1007 12:32:10.955516  766330 main.go:141] libmachine: (ha-053933-m02)     </console>
	I1007 12:32:10.955528  766330 main.go:141] libmachine: (ha-053933-m02)     <rng model='virtio'>
	I1007 12:32:10.955541  766330 main.go:141] libmachine: (ha-053933-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:32:10.955552  766330 main.go:141] libmachine: (ha-053933-m02)     </rng>
	I1007 12:32:10.955562  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955574  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955588  766330 main.go:141] libmachine: (ha-053933-m02)   </devices>
	I1007 12:32:10.955599  766330 main.go:141] libmachine: (ha-053933-m02) </domain>
	I1007 12:32:10.955606  766330 main.go:141] libmachine: (ha-053933-m02) 
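For reference, the XML logged above is handed to libvirt verbatim to define and boot the secondary node. A minimal sketch of that step, assuming the github.com/libvirt/libvirt-go bindings rather than minikube's actual kvm2 driver code:

package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

// defineAndStart defines a domain from XML like the one in the log and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // corresponds to the "Creating domain..." step
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}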
	I1007 12:32:10.964084  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:92:85:a0 in network default
	I1007 12:32:10.964913  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring networks are active...
	I1007 12:32:10.964943  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:10.966004  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network default is active
	I1007 12:32:10.966794  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network mk-ha-053933 is active
	I1007 12:32:10.967567  766330 main.go:141] libmachine: (ha-053933-m02) Getting domain xml...
	I1007 12:32:10.968704  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:11.328435  766330 main.go:141] libmachine: (ha-053933-m02) Waiting to get IP...
	I1007 12:32:11.329255  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.329657  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.329684  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.329635  766716 retry.go:31] will retry after 304.626046ms: waiting for machine to come up
	I1007 12:32:11.636452  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.636889  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.636919  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.636838  766716 retry.go:31] will retry after 276.587443ms: waiting for machine to come up
	I1007 12:32:11.915507  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.915953  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.915981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.915913  766716 retry.go:31] will retry after 337.132979ms: waiting for machine to come up
	I1007 12:32:12.254562  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.255006  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.255031  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.254957  766716 retry.go:31] will retry after 414.173139ms: waiting for machine to come up
	I1007 12:32:12.670554  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.670981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.671027  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.670964  766716 retry.go:31] will retry after 736.75735ms: waiting for machine to come up
	I1007 12:32:13.409001  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:13.409465  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:13.409492  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:13.409419  766716 retry.go:31] will retry after 877.012423ms: waiting for machine to come up
	I1007 12:32:14.288329  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:14.288723  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:14.288753  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:14.288684  766716 retry.go:31] will retry after 1.037556164s: waiting for machine to come up
	I1007 12:32:15.327401  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:15.327809  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:15.327836  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:15.327768  766716 retry.go:31] will retry after 1.075590546s: waiting for machine to come up
	I1007 12:32:16.404635  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:16.405141  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:16.405170  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:16.405088  766716 retry.go:31] will retry after 1.694642723s: waiting for machine to come up
	I1007 12:32:18.101812  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:18.102290  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:18.102307  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:18.102257  766716 retry.go:31] will retry after 2.246296895s: waiting for machine to come up
	I1007 12:32:20.351742  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:20.352251  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:20.352273  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:20.352201  766716 retry.go:31] will retry after 2.298110151s: waiting for machine to come up
	I1007 12:32:22.653604  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:22.654280  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:22.654305  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:22.654158  766716 retry.go:31] will retry after 3.347094149s: waiting for machine to come up
	I1007 12:32:26.003104  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:26.003592  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:26.003618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:26.003545  766716 retry.go:31] will retry after 3.946300567s: waiting for machine to come up
	I1007 12:32:29.951184  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:29.951661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:29.951683  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:29.951615  766716 retry.go:31] will retry after 4.942604939s: waiting for machine to come up
	I1007 12:32:34.900038  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.900804  766330 main.go:141] libmachine: (ha-053933-m02) Found IP for machine: 192.168.39.227
	I1007 12:32:34.900839  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
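The "waiting for machine to come up" lines above poll libvirt's DHCP leases with a growing, jittered delay until an address appears. A rough sketch of that wait loop, using a hypothetical lookupIP callback instead of minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a growing, jittered delay, roughly mirroring the
// retry messages in the log ("will retry after 304ms", "... after 736ms", ...).
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", fmt.Errorf("unable to find current IP address")
		}
		return "192.168.39.227", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}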
	I1007 12:32:34.900847  766330 main.go:141] libmachine: (ha-053933-m02) Reserving static IP address...
	I1007 12:32:34.901345  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "ha-053933-m02", mac: "52:54:00:e8:71:ec", ip: "192.168.39.227"} in network mk-ha-053933
	I1007 12:32:34.989559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:34.989593  766330 main.go:141] libmachine: (ha-053933-m02) Reserved static IP address: 192.168.39.227
	I1007 12:32:34.989607  766330 main.go:141] libmachine: (ha-053933-m02) Waiting for SSH to be available...
	I1007 12:32:34.993000  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.993348  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933
	I1007 12:32:34.993372  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:e8:71:ec
	I1007 12:32:34.993535  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:34.993565  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:34.993595  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:34.993608  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:34.993625  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:34.997438  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:32:34.997462  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:32:34.997471  766330 main.go:141] libmachine: (ha-053933-m02) DBG | command : exit 0
	I1007 12:32:34.997493  766330 main.go:141] libmachine: (ha-053933-m02) DBG | err     : exit status 255
	I1007 12:32:34.997502  766330 main.go:141] libmachine: (ha-053933-m02) DBG | output  : 
	I1007 12:32:38.000138  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:38.003563  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.003934  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.003965  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.004068  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:38.004097  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:38.004133  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:38.004156  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:38.004198  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:38.134356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: <nil>: 
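WaitForSSH simply keeps running "exit 0" over SSH until it succeeds; the first attempt above failed with exit status 255 because the guest had no address yet. A sketch of that probe using golang.org/x/crypto/ssh, with host key checking relaxed the same way the external ssh invocation in the log is (the key path is a placeholder):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH runs "exit 0" on the guest and reports whether sshd is ready.
func probeSSH(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no equivalent
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // the same no-op command as in the log
}

func main() {
	for i := 0; i < 20; i++ {
		if err := probeSSH("192.168.39.227:22", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3s
	}
	fmt.Println("gave up waiting for SSH")
}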
	I1007 12:32:38.134575  766330 main.go:141] libmachine: (ha-053933-m02) KVM machine creation complete!
	I1007 12:32:38.134919  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:38.135497  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135718  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135838  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:32:38.135854  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetState
	I1007 12:32:38.137125  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:32:38.137139  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:32:38.137144  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:32:38.137149  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.139531  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140008  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.140029  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140173  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.140353  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140459  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.140739  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.140945  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.140955  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:32:38.245844  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.245874  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:32:38.245883  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.249067  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249461  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.249482  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249773  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.249996  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250184  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250363  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.250493  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.250691  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.250704  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:32:38.363524  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:32:38.363625  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:32:38.363640  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:32:38.363656  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364053  766330 buildroot.go:166] provisioning hostname "ha-053933-m02"
	I1007 12:32:38.364084  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364321  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.367546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368073  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.368107  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368323  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.368535  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368704  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368874  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.369073  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.369311  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.369326  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m02 && echo "ha-053933-m02" | sudo tee /etc/hostname
	I1007 12:32:38.493958  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m02
	
	I1007 12:32:38.493990  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.496774  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497161  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.497193  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497352  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.497571  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497746  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497916  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.498140  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.498312  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.498329  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:32:38.616208  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.616246  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:32:38.616266  766330 buildroot.go:174] setting up certificates
	I1007 12:32:38.616276  766330 provision.go:84] configureAuth start
	I1007 12:32:38.616286  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.616609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:38.619075  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619398  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.619427  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619572  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.621757  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622105  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.622129  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622285  766330 provision.go:143] copyHostCerts
	I1007 12:32:38.622318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622352  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:32:38.622361  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622432  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:32:38.622511  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622529  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:32:38.622535  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622558  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:32:38.622599  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622622  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:32:38.622630  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622663  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:32:38.622733  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m02 san=[127.0.0.1 192.168.39.227 ha-053933-m02 localhost minikube]
	I1007 12:32:38.708452  766330 provision.go:177] copyRemoteCerts
	I1007 12:32:38.708528  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:32:38.708564  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.710962  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711285  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.711318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711472  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.711655  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.711820  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.711918  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:38.799093  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:32:38.799174  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:32:38.827105  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:32:38.827188  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:32:38.854871  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:32:38.854953  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:32:38.882148  766330 provision.go:87] duration metric: took 265.856123ms to configureAuth
	I1007 12:32:38.882180  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:32:38.882387  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:38.882485  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.885151  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885511  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.885545  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885761  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.885978  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886151  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886344  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.886506  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.886695  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.886715  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:32:39.128135  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:32:39.128167  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:32:39.128176  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetURL
	I1007 12:32:39.129618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using libvirt version 6000000
	I1007 12:32:39.132019  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132387  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.132415  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132625  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:32:39.132640  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:32:39.132647  766330 client.go:171] duration metric: took 28.73104158s to LocalClient.Create
	I1007 12:32:39.132672  766330 start.go:167] duration metric: took 28.731111532s to libmachine.API.Create "ha-053933"
	I1007 12:32:39.132682  766330 start.go:293] postStartSetup for "ha-053933-m02" (driver="kvm2")
	I1007 12:32:39.132692  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:32:39.132710  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.132980  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:32:39.133017  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.135744  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136124  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.136167  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136341  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.136530  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.136675  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.136835  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.221605  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:32:39.226484  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:32:39.226514  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:32:39.226584  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:32:39.226655  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:32:39.226665  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:32:39.226746  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:32:39.237427  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:39.261998  766330 start.go:296] duration metric: took 129.301228ms for postStartSetup
	I1007 12:32:39.262093  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:39.262719  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.265384  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.265792  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.265819  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.266155  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:39.266397  766330 start.go:128] duration metric: took 28.884542194s to createHost
	I1007 12:32:39.266428  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.268718  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.268995  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.269035  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.269138  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.269298  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269463  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269575  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.269703  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:39.269878  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:39.269888  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:32:39.379504  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304359.360836408
	
	I1007 12:32:39.379530  766330 fix.go:216] guest clock: 1728304359.360836408
	I1007 12:32:39.379539  766330 fix.go:229] Guest: 2024-10-07 12:32:39.360836408 +0000 UTC Remote: 2024-10-07 12:32:39.26641087 +0000 UTC m=+81.160941412 (delta=94.425538ms)
	I1007 12:32:39.379557  766330 fix.go:200] guest clock delta is within tolerance: 94.425538ms
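The guest-clock check above runs date +%s.%N on the guest and compares it with the host's wall clock; a resync is only needed when the delta exceeds some tolerance. A small sketch of that comparison, with the tolerance value assumed for illustration rather than taken from minikube:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute difference from the given local timestamp.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(local.Sub(guest)))), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration only
	// Values from the log: guest 1728304359.360836408, remote 12:32:39.26641087 UTC.
	delta, err := clockDelta("1728304359.360836408", time.Unix(0, 1728304359266410870))
	if err != nil {
		panic(err)
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock needs adjusting, delta %v\n", delta)
	}
}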
	I1007 12:32:39.379562  766330 start.go:83] releasing machines lock for "ha-053933-m02", held for 28.997822917s
	I1007 12:32:39.379579  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.379889  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.383410  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.383763  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.383796  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.386874  766330 out.go:177] * Found network options:
	I1007 12:32:39.388989  766330 out.go:177]   - NO_PROXY=192.168.39.152
	W1007 12:32:39.390421  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.390479  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391270  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391484  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391605  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:32:39.391666  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	W1007 12:32:39.391801  766330 proxy.go:119] fail to check proxy env: Error ip not in block
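The "fail to check proxy env: Error ip not in block" warnings come from checking whether this node's IP is covered by NO_PROXY; here NO_PROXY lists only the primary node's literal address (192.168.39.152), not a CIDR block containing 192.168.39.227. A hedged sketch of that kind of check (not minikube's actual proxy.go):

package main

import (
	"fmt"
	"net"
	"strings"
)

// coveredByNoProxy reports whether ip matches any NO_PROXY entry, treating
// entries either as CIDR blocks or as literal addresses.
func coveredByNoProxy(noProxy, ip string) bool {
	addr := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(addr) {
				return true
			}
			continue
		}
		if entry == ip {
			return true
		}
	}
	return false
}

func main() {
	// NO_PROXY=192.168.39.152, as printed in the log above.
	fmt.Println(coveredByNoProxy("192.168.39.152", "192.168.39.227")) // false: "ip not in block"
}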
	I1007 12:32:39.391871  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:32:39.391887  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.394867  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.394901  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395284  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395339  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395674  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395681  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395918  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.395928  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.396088  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396100  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396238  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.396245  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.642441  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:32:39.649674  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:32:39.649767  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:32:39.666653  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:32:39.666687  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:32:39.666767  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:32:39.684589  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:32:39.700168  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:32:39.700231  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:32:39.716005  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:32:39.731764  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:32:39.862714  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:32:40.011007  766330 docker.go:233] disabling docker service ...
	I1007 12:32:40.011096  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:32:40.027322  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:32:40.041607  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:32:40.187585  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:32:40.331438  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:32:40.347382  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:32:40.367495  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:32:40.367556  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.379748  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:32:40.379840  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.391760  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.403745  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.415505  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:32:40.428366  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.441667  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.460916  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
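The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. A rough Go equivalent of the first few substitutions on a config string (illustrative only, not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies roughly the same edits as the sed commands in the log.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"") // re-add it after cgroup_manager
	return conf
}

func main() {
	fmt.Println(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"))
}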
	I1007 12:32:40.473748  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:32:40.485573  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:32:40.485645  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:32:40.500703  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:32:40.512028  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:40.646960  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:32:40.739246  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:32:40.739338  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:32:40.744292  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:32:40.744359  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:32:40.748439  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:32:40.790232  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:32:40.790320  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.827829  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.860461  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:32:40.862462  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:32:40.864274  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:40.867846  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868296  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:40.868323  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868742  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:32:40.873673  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:40.887367  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:32:40.887606  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:40.887888  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.887931  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.903464  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:32:40.903898  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.904410  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.904433  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.904903  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.905134  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:40.906904  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:40.907228  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.907278  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.922960  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I1007 12:32:40.923502  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.924055  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.924078  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.924407  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.924586  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:40.924737  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.227
	I1007 12:32:40.924756  766330 certs.go:194] generating shared ca certs ...
	I1007 12:32:40.924778  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:40.924946  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:32:40.925010  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:32:40.925020  766330 certs.go:256] generating profile certs ...
	I1007 12:32:40.925169  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:32:40.925208  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90
	I1007 12:32:40.925226  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.254]
	I1007 12:32:41.148971  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 ...
	I1007 12:32:41.149006  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90: {Name:mkfc72ac98e5f64b1efa978f83502cc26e6b00b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149188  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 ...
	I1007 12:32:41.149202  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90: {Name:mkb6d827b308c96cc8f5173b1a5723adff201a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149277  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:32:41.149418  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
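The apiserver certificate is regenerated here because the new control-plane node's IP (192.168.39.227) must appear in the SAN list alongside the service ClusterIP (10.96.0.1), the primary node's IP, and the VIP (192.168.39.254). A minimal sketch of issuing such a cert with Go's crypto/x509, using a throwaway CA in place of the persistent minikubeCA; the subject and validity period are assumptions, only the IP SANs come from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the persistent minikubeCA under .minikube/.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server serving cert whose IP SANs match the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.152"), net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}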
	I1007 12:32:41.149564  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:32:41.149589  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:32:41.149603  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:32:41.149618  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:32:41.149632  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:32:41.149645  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:32:41.149658  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:32:41.149670  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:32:41.149681  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:32:41.149730  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:32:41.149764  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:32:41.149774  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:32:41.149801  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:32:41.149822  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:32:41.149848  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:32:41.149885  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:41.149911  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.149925  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.149937  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.149971  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:41.153293  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153635  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:41.153659  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:41.154192  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:41.154376  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:41.154520  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:41.226577  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:32:41.232730  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:32:41.245060  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:32:41.251197  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:32:41.264593  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:32:41.269517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:32:41.281560  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:32:41.286754  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:32:41.299707  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:32:41.304594  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:32:41.317916  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:32:41.323393  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:32:41.336013  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:32:41.366179  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:32:41.393458  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:32:41.419874  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:32:41.447814  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:32:41.474678  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:32:41.500522  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:32:41.527411  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:32:41.552513  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:32:41.576732  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:32:41.602701  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:32:41.628143  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:32:41.644998  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:32:41.662248  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:32:41.679785  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:32:41.698239  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:32:41.717010  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:32:41.735412  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:32:41.753557  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:32:41.759787  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:32:41.771601  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776332  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776414  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.782579  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:32:41.793992  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:32:41.806293  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811220  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811296  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.817656  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:32:41.829292  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:32:41.840880  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845905  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845988  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.852343  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
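	(For reference, the three command pairs above install each CA into the guest's system trust store: "openssl x509 -hash -noout -in <cert>" prints the certificate's subject-name hash, and a symlink named "<hash>.0" in /etc/ssl/certs is pointed at the certificate, e.g. b5213941.0 for minikubeCA.pem. A minimal Go sketch of the same hash-and-symlink step follows; paths are illustrative and the openssl binary plus write access to /etc/ssl/certs are assumed.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Illustrative path; on the guest this is one of the files logged above.
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// Subject-name hash that OpenSSL expects the trust-store link to use.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		// Equivalent of the logged "ln -fs <cert> /etc/ssl/certs/<hash>.0".
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}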
	I1007 12:32:41.864190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:32:41.868675  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:32:41.868747  766330 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1007 12:32:41.868844  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:32:41.868868  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:32:41.868905  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:32:41.889715  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:32:41.889813  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:32:41.889876  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.901277  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:32:41.901344  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.911928  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:32:41.911964  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912020  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912066  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:32:41.912079  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:32:41.917061  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:32:41.917099  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:32:42.483195  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.483287  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.490132  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:32:42.490184  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:32:42.569436  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:32:42.620637  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.620740  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.635485  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:32:42.635527  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 12:32:43.157634  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:32:43.168142  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:32:43.185353  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:32:43.203562  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:32:43.222930  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:32:43.227330  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:43.240979  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:43.377709  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:32:43.396837  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:43.397301  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:43.397366  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:43.414130  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1007 12:32:43.414696  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:43.415312  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:43.415338  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:43.415686  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:43.415901  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:43.416102  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:32:43.416222  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:32:43.416248  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:43.419194  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419695  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:43.419728  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419860  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:43.420045  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:43.420225  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:43.420387  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:43.569631  766330 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:43.569697  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I1007 12:33:05.382098  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (21.812371374s)
	I1007 12:33:05.382136  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:33:05.983459  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m02 minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:33:06.136889  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:33:06.286153  766330 start.go:319] duration metric: took 22.870046293s to joinCluster
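	(For reference, joining m02 as an additional control plane is the two-step flow the log shows: a reusable join command is minted on the existing control plane with "kubeadm token create --print-join-command --ttl=0", then executed over SSH on the new machine with --control-plane and its own advertise address and bind port. Below is a minimal Go sketch of that flow; the kubeadm flags are the ones that appear in the log, everything else is illustrative and not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1, on the existing control plane: print a join command that
		// carries a non-expiring bootstrap token and the CA cert hash.
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))

		// Step 2, to be run on the joining machine: append --control-plane and
		// the node's own advertise address/port so it joins as a control-plane
		// member rather than a worker.
		fmt.Println(joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443")
	}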
	I1007 12:33:06.286246  766330 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:06.286558  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:06.288312  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:33:06.290220  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:06.583421  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:06.686534  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:33:06.686755  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:33:06.686819  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:33:06.687163  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m02" to be "Ready" ...
	I1007 12:33:06.687340  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:06.687357  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:06.687368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:06.687373  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:06.711245  766330 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1007 12:33:07.188212  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.188242  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.188255  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.188274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.191359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:07.688452  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.688484  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.688497  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.688502  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.808189  766330 round_trippers.go:574] Response Status: 200 OK in 119 milliseconds
	I1007 12:33:08.187451  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.187480  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.187491  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.187496  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.191935  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:08.687677  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.687701  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.687711  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.687719  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.690915  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:08.691670  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:09.188237  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.188270  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.188281  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.188289  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.194158  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:09.687515  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.687547  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.687557  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.687562  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.690808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.188360  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.188385  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.188394  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.188400  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.191880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.688056  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.688084  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.688096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.688104  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.691003  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:11.188165  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.188195  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.188206  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.188211  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.191751  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:11.192284  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:11.687697  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.687733  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.687744  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.687751  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.692471  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:12.187925  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.187959  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.187971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.187977  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.191580  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:12.687588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.687620  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.687631  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.687637  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.691690  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:13.187912  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.187949  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.187959  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.187964  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.191046  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.688329  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.688359  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.688370  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.688374  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.692160  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.692713  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:14.188174  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.188198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.188207  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.188210  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.197312  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:33:14.688323  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.688353  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.688364  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.688369  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.692255  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.188273  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.188299  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.188309  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.188312  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.191633  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.688194  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.688221  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.688229  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.688233  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.691201  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:16.188087  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.188118  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.188130  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.188136  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.191654  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:16.192613  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:16.688084  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.688116  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.688127  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.688131  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.691196  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.188046  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.188079  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.188091  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.188099  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.191563  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.687488  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.687515  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.687523  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.687527  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.692225  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:18.187466  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.187496  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.187508  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.187513  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.190916  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.688169  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.688198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.688209  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.688214  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.691684  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.692180  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:19.188410  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.188443  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.188455  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.188461  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.191778  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:19.687861  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.687898  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.687909  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.687918  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.692517  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.187370  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.187394  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.187404  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.187409  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.190680  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.688383  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.688409  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.688418  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.688422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.692411  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.692972  766330 node_ready.go:49] node "ha-053933-m02" has status "Ready":"True"
	I1007 12:33:20.692999  766330 node_ready.go:38] duration metric: took 14.005807631s for node "ha-053933-m02" to be "Ready" ...
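	(For reference, the repeated GETs of /api/v1/nodes/ha-053933-m02 above are a poll for the node's Ready condition. A minimal client-go sketch of the same kind of wait is below; the kubeconfig path, node name and timeout are illustrative and this is not minikube's implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// Same request the log shows: GET the node and inspect its conditions.
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-053933-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}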
	I1007 12:33:20.693012  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:33:20.693143  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:20.693154  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.693162  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.693165  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.697361  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.703660  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.703786  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:33:20.703796  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.703803  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.703807  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.707181  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.708043  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.708061  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.708069  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.708074  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.710812  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.711426  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.711448  766330 pod_ready.go:82] duration metric: took 7.751816ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711460  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:33:20.711534  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.711542  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.711545  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.714909  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.715901  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.715918  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.715927  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.715934  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.719555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.720647  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.720668  766330 pod_ready.go:82] duration metric: took 9.201382ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720679  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720751  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:33:20.720759  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.720768  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.720773  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.723495  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.724196  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.724215  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.724226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.724229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.726952  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.727595  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.727616  766330 pod_ready.go:82] duration metric: took 6.930211ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727627  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727692  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:20.727700  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.727714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.727718  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.731049  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.731750  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.731766  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.731786  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.731793  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.734880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.228231  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.228260  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.228274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.228281  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.231667  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.232387  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.232407  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.232416  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.232422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.235588  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.728588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.728616  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.728628  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.728635  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.732106  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.732770  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.732786  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.732795  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.732798  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.735773  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.228683  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:22.228711  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.228720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.228724  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232193  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.232808  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.232825  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.232834  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232839  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.235792  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.236315  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.236338  766330 pod_ready.go:82] duration metric: took 1.508704734s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236354  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236419  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:33:22.236427  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.236434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.236438  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.239818  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.288880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:22.288905  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.288915  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.288920  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.292489  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.293074  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.293096  766330 pod_ready.go:82] duration metric: took 56.735786ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.293107  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.488539  766330 request.go:632] Waited for 195.305457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488616  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488627  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.488640  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.488646  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.492086  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.688457  766330 request.go:632] Waited for 195.312015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688532  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688537  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.688546  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.688550  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.691998  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.692647  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.692670  766330 pod_ready.go:82] duration metric: took 399.55659ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.692683  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.888729  766330 request.go:632] Waited for 195.939419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888840  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888849  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.888862  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.888872  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.892505  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.088565  766330 request.go:632] Waited for 195.365241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088643  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088651  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.088662  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.088670  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.091637  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.092259  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.092277  766330 pod_ready.go:82] duration metric: took 399.588182ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.092289  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.289099  766330 request.go:632] Waited for 196.721146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289204  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289216  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.289227  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.289236  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.292352  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.488835  766330 request.go:632] Waited for 195.58765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488907  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488912  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.488920  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.488925  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.491857  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.492343  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.492364  766330 pod_ready.go:82] duration metric: took 400.067435ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.492375  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.688407  766330 request.go:632] Waited for 195.943093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688521  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688529  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.688538  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.688543  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.692233  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.888501  766330 request.go:632] Waited for 195.323816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888614  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888622  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.888633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.888639  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.892680  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:23.893104  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.893123  766330 pod_ready.go:82] duration metric: took 400.740542ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.893133  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.089301  766330 request.go:632] Waited for 196.068782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089368  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089374  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.089388  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.089395  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.092648  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.288647  766330 request.go:632] Waited for 195.319776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288759  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288778  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.288794  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.288805  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.292348  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.292959  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.292988  766330 pod_ready.go:82] duration metric: took 399.844819ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.293007  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.489072  766330 request.go:632] Waited for 195.96428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489149  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489157  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.489167  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.489175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.492662  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.688896  766330 request.go:632] Waited for 195.439422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689009  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689017  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.689029  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.689035  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.692350  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.692962  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.692988  766330 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.693003  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.889214  766330 request.go:632] Waited for 196.093786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889300  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889309  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.889322  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.889329  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.892619  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.088740  766330 request.go:632] Waited for 195.405391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088815  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088821  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.088831  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.088837  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.092543  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.093141  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:25.093166  766330 pod_ready.go:82] duration metric: took 400.155132ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:25.093183  766330 pod_ready.go:39] duration metric: took 4.400126454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
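The pod_ready loop summarized above polls each system-critical pod until its Ready condition reports True, with a 6m0s budget per pod. Below is a minimal stand-alone sketch of that kind of poll using client-go; the kubeconfig path is an assumed placeholder and the pod name is just one of the pods named in the log, so this illustrates the pattern rather than minikube's actual helper.

    // readiness_sketch.go: poll one pod until it reports Ready or a deadline passes.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll until the pod reports the Ready condition, or give up after
    	// 6 minutes, matching the 6m0s budget shown in the log.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-053933", metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }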
	I1007 12:33:25.093213  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:33:25.093283  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:33:25.111694  766330 api_server.go:72] duration metric: took 18.825401123s to wait for apiserver process to appear ...
	I1007 12:33:25.111735  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:33:25.111762  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:33:25.118517  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:33:25.118624  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:33:25.118639  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.118651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.118656  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.119598  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:33:25.119715  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:33:25.119734  766330 api_server.go:131] duration metric: took 7.991573ms to wait for apiserver health ...
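The healthz/version step above amounts to two authenticated GETs against the API server. The sketch below reproduces that with only the Go standard library; the CA and client-certificate paths are assumptions about a typical minikube profile layout, not values read from this run.

    // healthz_sketch.go: probe /healthz and /version over mutual TLS.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	caPEM, err := os.ReadFile("/home/jenkins/.minikube/ca.crt") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	// Assumed client certificate/key for the profile.
    	cert, err := tls.LoadX509KeyPair(
    		"/home/jenkins/.minikube/profiles/ha-053933/client.crt",
    		"/home/jenkins/.minikube/profiles/ha-053933/client.key",
    	)
    	if err != nil {
    		panic(err)
    	}

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
    	}}

    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.39.152:8443" + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
    	}
    }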
	I1007 12:33:25.119743  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:33:25.289166  766330 request.go:632] Waited for 169.340781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289250  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289255  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.289263  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.289268  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.295241  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.299874  766330 system_pods.go:59] 17 kube-system pods found
	I1007 12:33:25.299914  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.299919  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.299923  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.299926  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.299929  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.299933  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.299938  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.299941  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.299944  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.299947  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.299950  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.299953  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.299956  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.299959  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.299962  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.300005  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.300042  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.300050  766330 system_pods.go:74] duration metric: took 180.300279ms to wait for pod list to return data ...
	I1007 12:33:25.300061  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:33:25.489349  766330 request.go:632] Waited for 189.154197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489422  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489429  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.489441  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.489451  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.493783  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.494042  766330 default_sa.go:45] found service account: "default"
	I1007 12:33:25.494060  766330 default_sa.go:55] duration metric: took 193.9912ms for default service account to be created ...
	I1007 12:33:25.494070  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:33:25.688474  766330 request.go:632] Waited for 194.303496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688554  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688560  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.688568  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.688572  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.694194  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.700121  766330 system_pods.go:86] 17 kube-system pods found
	I1007 12:33:25.700159  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.700167  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.700179  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.700185  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.700191  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.700196  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.700202  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.700207  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.700213  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.700218  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.700223  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.700228  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.700233  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.700242  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.700248  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.700255  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.700258  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.700266  766330 system_pods.go:126] duration metric: took 206.189927ms to wait for k8s-apps to be running ...
	I1007 12:33:25.700277  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:33:25.700338  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:25.716873  766330 system_svc.go:56] duration metric: took 16.577644ms WaitForService to wait for kubelet
	I1007 12:33:25.716918  766330 kubeadm.go:582] duration metric: took 19.430632885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:33:25.716946  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:33:25.889445  766330 request.go:632] Waited for 172.381554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889527  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889535  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.889543  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.889547  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.893637  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.894406  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894446  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894466  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894476  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894483  766330 node_conditions.go:105] duration metric: took 177.530833ms to run NodePressure ...
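The node_conditions check reads back each node's reported CPU and ephemeral-storage capacity, which is where the two "capacity" figures above come from. A short client-go sketch of the same read-back follows; the kubeconfig path is again an assumed placeholder.

    // node_capacity_sketch.go: list nodes and print CPU / ephemeral-storage capacity.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }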
	I1007 12:33:25.894499  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:33:25.894527  766330 start.go:255] writing updated cluster config ...
	I1007 12:33:25.896984  766330 out.go:201] 
	I1007 12:33:25.898622  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:25.898739  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.900470  766330 out.go:177] * Starting "ha-053933-m03" control-plane node in "ha-053933" cluster
	I1007 12:33:25.901744  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:33:25.901777  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:33:25.901887  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:33:25.901898  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:33:25.901996  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.902210  766330 start.go:360] acquireMachinesLock for ha-053933-m03: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:33:25.902261  766330 start.go:364] duration metric: took 29.008µs to acquireMachinesLock for "ha-053933-m03"
	I1007 12:33:25.902279  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:25.902373  766330 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:33:25.903871  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:33:25.903977  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:25.904021  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:25.919504  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I1007 12:33:25.920002  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:25.920499  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:25.920525  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:25.920897  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:25.921112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:25.921261  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:25.921411  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:33:25.921445  766330 client.go:168] LocalClient.Create starting
	I1007 12:33:25.921486  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:33:25.921530  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921554  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921635  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:33:25.921664  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921680  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921706  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:33:25.921718  766330 main.go:141] libmachine: (ha-053933-m03) Calling .PreCreateCheck
	I1007 12:33:25.921884  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:25.922300  766330 main.go:141] libmachine: Creating machine...
	I1007 12:33:25.922316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .Create
	I1007 12:33:25.922510  766330 main.go:141] libmachine: (ha-053933-m03) Creating KVM machine...
	I1007 12:33:25.923845  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing default KVM network
	I1007 12:33:25.924001  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing private KVM network mk-ha-053933
	I1007 12:33:25.924170  766330 main.go:141] libmachine: (ha-053933-m03) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:25.924210  766330 main.go:141] libmachine: (ha-053933-m03) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:33:25.924298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:25.924182  767113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:25.924373  766330 main.go:141] libmachine: (ha-053933-m03) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:33:26.206977  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.206809  767113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa...
	I1007 12:33:26.524415  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524231  767113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk...
	I1007 12:33:26.524455  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing magic tar header
	I1007 12:33:26.524470  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing SSH key tar header
	I1007 12:33:26.524482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524376  767113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:26.524496  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03
	I1007 12:33:26.524534  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 (perms=drwx------)
	I1007 12:33:26.524574  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:33:26.524585  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:33:26.524600  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:33:26.524609  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:33:26.524638  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:26.524653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:33:26.524661  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:33:26.524670  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:33:26.524678  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home
	I1007 12:33:26.524693  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Skipping /home - not owner
	I1007 12:33:26.524703  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:33:26.524718  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:33:26.524726  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.525722  766330 main.go:141] libmachine: (ha-053933-m03) define libvirt domain using xml: 
	I1007 12:33:26.525747  766330 main.go:141] libmachine: (ha-053933-m03) <domain type='kvm'>
	I1007 12:33:26.525776  766330 main.go:141] libmachine: (ha-053933-m03)   <name>ha-053933-m03</name>
	I1007 12:33:26.525795  766330 main.go:141] libmachine: (ha-053933-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:33:26.525808  766330 main.go:141] libmachine: (ha-053933-m03)   <vcpu>2</vcpu>
	I1007 12:33:26.525818  766330 main.go:141] libmachine: (ha-053933-m03)   <features>
	I1007 12:33:26.525830  766330 main.go:141] libmachine: (ha-053933-m03)     <acpi/>
	I1007 12:33:26.525838  766330 main.go:141] libmachine: (ha-053933-m03)     <apic/>
	I1007 12:33:26.525850  766330 main.go:141] libmachine: (ha-053933-m03)     <pae/>
	I1007 12:33:26.525859  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.525905  766330 main.go:141] libmachine: (ha-053933-m03)   </features>
	I1007 12:33:26.525934  766330 main.go:141] libmachine: (ha-053933-m03)   <cpu mode='host-passthrough'>
	I1007 12:33:26.525945  766330 main.go:141] libmachine: (ha-053933-m03)   
	I1007 12:33:26.525955  766330 main.go:141] libmachine: (ha-053933-m03)   </cpu>
	I1007 12:33:26.525965  766330 main.go:141] libmachine: (ha-053933-m03)   <os>
	I1007 12:33:26.525971  766330 main.go:141] libmachine: (ha-053933-m03)     <type>hvm</type>
	I1007 12:33:26.525976  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='cdrom'/>
	I1007 12:33:26.525983  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='hd'/>
	I1007 12:33:26.525988  766330 main.go:141] libmachine: (ha-053933-m03)     <bootmenu enable='no'/>
	I1007 12:33:26.525995  766330 main.go:141] libmachine: (ha-053933-m03)   </os>
	I1007 12:33:26.526002  766330 main.go:141] libmachine: (ha-053933-m03)   <devices>
	I1007 12:33:26.526013  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='cdrom'>
	I1007 12:33:26.526054  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/boot2docker.iso'/>
	I1007 12:33:26.526067  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:33:26.526077  766330 main.go:141] libmachine: (ha-053933-m03)       <readonly/>
	I1007 12:33:26.526087  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526099  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='disk'>
	I1007 12:33:26.526109  766330 main.go:141] libmachine: (ha-053933-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:33:26.526124  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk'/>
	I1007 12:33:26.526142  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:33:26.526153  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526162  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526172  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='mk-ha-053933'/>
	I1007 12:33:26.526180  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526189  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526201  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526212  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='default'/>
	I1007 12:33:26.526219  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526233  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526252  766330 main.go:141] libmachine: (ha-053933-m03)     <serial type='pty'>
	I1007 12:33:26.526271  766330 main.go:141] libmachine: (ha-053933-m03)       <target port='0'/>
	I1007 12:33:26.526293  766330 main.go:141] libmachine: (ha-053933-m03)     </serial>
	I1007 12:33:26.526317  766330 main.go:141] libmachine: (ha-053933-m03)     <console type='pty'>
	I1007 12:33:26.526331  766330 main.go:141] libmachine: (ha-053933-m03)       <target type='serial' port='0'/>
	I1007 12:33:26.526341  766330 main.go:141] libmachine: (ha-053933-m03)     </console>
	I1007 12:33:26.526352  766330 main.go:141] libmachine: (ha-053933-m03)     <rng model='virtio'>
	I1007 12:33:26.526364  766330 main.go:141] libmachine: (ha-053933-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:33:26.526375  766330 main.go:141] libmachine: (ha-053933-m03)     </rng>
	I1007 12:33:26.526382  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526387  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526400  766330 main.go:141] libmachine: (ha-053933-m03)   </devices>
	I1007 12:33:26.526412  766330 main.go:141] libmachine: (ha-053933-m03) </domain>
	I1007 12:33:26.526422  766330 main.go:141] libmachine: (ha-053933-m03) 
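The XML printed above is handed to libvirt to define the ha-053933-m03 domain. As a rough illustration of that step only, the sketch below defines and starts a domain from an XML file using the libvirt Go bindings (libvirt.org/go/libvirt); it is not the code path the kvm2 driver actually runs, and the XML file name is an assumption.

    // define_domain_sketch.go: define and boot a libvirt domain from an XML description.
    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// Assumed file holding domain XML like the block printed above.
    	xml, err := os.ReadFile("ha-053933-m03.xml")
    	if err != nil {
    		panic(err)
    	}

    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	// Create() boots the defined-but-inactive domain.
    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	fmt.Println("domain defined and started")
    }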
	I1007 12:33:26.533781  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:c6:4c:5a in network default
	I1007 12:33:26.534377  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring networks are active...
	I1007 12:33:26.534401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.535036  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network default is active
	I1007 12:33:26.535318  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network mk-ha-053933 is active
	I1007 12:33:26.535654  766330 main.go:141] libmachine: (ha-053933-m03) Getting domain xml...
	I1007 12:33:26.536349  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.886582  766330 main.go:141] libmachine: (ha-053933-m03) Waiting to get IP...
	I1007 12:33:26.887435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.887805  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:26.887834  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.887787  767113 retry.go:31] will retry after 278.405187ms: waiting for machine to come up
	I1007 12:33:27.168357  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.168978  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.169005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.168920  767113 retry.go:31] will retry after 329.830323ms: waiting for machine to come up
	I1007 12:33:27.500231  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.500684  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.500728  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.500604  767113 retry.go:31] will retry after 372.653315ms: waiting for machine to come up
	I1007 12:33:27.875190  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.875624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.875654  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.875577  767113 retry.go:31] will retry after 444.943717ms: waiting for machine to come up
	I1007 12:33:28.322485  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.322945  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.322970  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.322909  767113 retry.go:31] will retry after 669.257582ms: waiting for machine to come up
	I1007 12:33:28.994144  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.994697  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.994715  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.994632  767113 retry.go:31] will retry after 733.137025ms: waiting for machine to come up
	I1007 12:33:29.729782  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:29.730264  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:29.730293  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:29.730214  767113 retry.go:31] will retry after 899.738353ms: waiting for machine to come up
	I1007 12:33:30.632328  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:30.632890  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:30.632916  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:30.632809  767113 retry.go:31] will retry after 931.890845ms: waiting for machine to come up
	I1007 12:33:31.566008  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:31.566423  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:31.566453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:31.566382  767113 retry.go:31] will retry after 1.324143868s: waiting for machine to come up
	I1007 12:33:32.892206  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:32.892600  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:32.892624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:32.892560  767113 retry.go:31] will retry after 1.884957277s: waiting for machine to come up
	I1007 12:33:34.779972  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:34.780414  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:34.780482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:34.780403  767113 retry.go:31] will retry after 2.797940617s: waiting for machine to come up
	I1007 12:33:37.580503  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:37.580938  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:37.581017  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:37.580916  767113 retry.go:31] will retry after 3.450180083s: waiting for machine to come up
	I1007 12:33:41.032804  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:41.033196  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:41.033227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:41.033144  767113 retry.go:31] will retry after 3.620491508s: waiting for machine to come up
	I1007 12:33:44.657262  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:44.657724  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:44.657749  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:44.657677  767113 retry.go:31] will retry after 4.652577623s: waiting for machine to come up
	I1007 12:33:49.314220  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314598  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314619  766330 main.go:141] libmachine: (ha-053933-m03) Found IP for machine: 192.168.39.53
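The "will retry after ..." lines above come from a retry loop that keeps growing its wait until the new domain picks up a DHCP lease. The sketch below shows that pattern in isolation; the probe it uses (shelling out to virsh domifaddr) is a hypothetical stand-in for however the driver actually reads the lease, and the backoff numbers are assumptions.

    // wait_for_ip_sketch.go: retry with a growing delay until the domain reports an IP.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // lookupIP returns the first IPv4 address reported for the domain, if any.
    func lookupIP(domain string) (string, bool) {
    	out, err := exec.Command("virsh", "domifaddr", domain, "--source", "lease").Output()
    	if err != nil {
    		return "", false
    	}
    	for _, field := range strings.Fields(string(out)) {
    		if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
    			return strings.SplitN(field, "/", 2)[0], true
    		}
    	}
    	return "", false
    }

    func main() {
    	delay := 300 * time.Millisecond
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ip, ok := lookupIP("ha-053933-m03"); ok {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		fmt.Printf("no IP yet, will retry after %s\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the wait between attempts, as the log does
    	}
    	fmt.Println("timed out waiting for an IP address")
    }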
	I1007 12:33:49.314644  766330 main.go:141] libmachine: (ha-053933-m03) Reserving static IP address...
	I1007 12:33:49.315014  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "ha-053933-m03", mac: "52:54:00:92:71:bc", ip: "192.168.39.53"} in network mk-ha-053933
	I1007 12:33:49.395618  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:49.395664  766330 main.go:141] libmachine: (ha-053933-m03) Reserved static IP address: 192.168.39.53
	I1007 12:33:49.395679  766330 main.go:141] libmachine: (ha-053933-m03) Waiting for SSH to be available...
	I1007 12:33:49.398571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.398960  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933
	I1007 12:33:49.398990  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:92:71:bc
	I1007 12:33:49.399160  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:49.399184  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:49.399214  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:49.399227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:49.399241  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:49.403005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:33:49.403027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:33:49.403035  766330 main.go:141] libmachine: (ha-053933-m03) DBG | command : exit 0
	I1007 12:33:49.403039  766330 main.go:141] libmachine: (ha-053933-m03) DBG | err     : exit status 255
	I1007 12:33:49.403074  766330 main.go:141] libmachine: (ha-053933-m03) DBG | output  : 
	I1007 12:33:52.403247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:52.406252  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.406668  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.406699  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.407002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:52.407027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:52.407053  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:52.407069  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:52.407109  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:52.534915  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: <nil>: 
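WaitForSSH, shown above succeeding after one refused attempt, boils down to running "exit 0" over ssh until it works. A small sketch of that loop follows; the ssh options and key path mirror the command line printed in the log, while the attempt count and 3-second spacing are assumptions.

    // wait_for_ssh_sketch.go: retry `ssh ... exit 0` until the guest accepts the connection.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", "/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa",
    		"docker@192.168.39.53",
    		"exit 0",
    	}
    	for attempt := 1; attempt <= 20; attempt++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		fmt.Printf("attempt %d: SSH not ready, retrying in 3s\n", attempt)
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }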
	I1007 12:33:52.535288  766330 main.go:141] libmachine: (ha-053933-m03) KVM machine creation complete!
	I1007 12:33:52.535635  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:52.536389  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536639  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536874  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:33:52.536891  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetState
	I1007 12:33:52.538444  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:33:52.538462  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:33:52.538469  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:33:52.538476  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.541542  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.541939  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.541963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.542112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.542296  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542481  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542677  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.542861  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.543138  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.543151  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:33:52.649741  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:52.649782  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:33:52.649794  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.652589  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.652969  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.653002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.653140  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.653374  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653551  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653673  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.653873  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.654072  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.654084  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:33:52.759715  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:33:52.759834  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:33:52.759854  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:33:52.759868  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760230  766330 buildroot.go:166] provisioning hostname "ha-053933-m03"
	I1007 12:33:52.760268  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760500  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.763370  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.763827  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.763857  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.764033  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.764271  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764477  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764633  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.764776  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.764967  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.764978  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m03 && echo "ha-053933-m03" | sudo tee /etc/hostname
	I1007 12:33:52.887558  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m03
	
	I1007 12:33:52.887587  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.890785  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.891281  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891393  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.891600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.891855  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.892166  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.892433  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.892634  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.892651  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:33:53.009149  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:53.009337  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:33:53.009478  766330 buildroot.go:174] setting up certificates
	I1007 12:33:53.009552  766330 provision.go:84] configureAuth start
	I1007 12:33:53.009602  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:53.009986  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.012616  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.012988  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.013047  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.013159  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.015298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015632  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.015653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015824  766330 provision.go:143] copyHostCerts
	I1007 12:33:53.015867  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.015916  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:33:53.015927  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.016009  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:33:53.016125  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016152  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:33:53.016162  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016198  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:33:53.016272  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016302  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:33:53.016310  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016353  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:33:53.016436  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m03 san=[127.0.0.1 192.168.39.53 ha-053933-m03 localhost minikube]
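The step above issues a server certificate signed by the shared minikube CA, with the node IP, hostname, localhost, and loopback address as SANs. A minimal sketch of that kind of issuance with Go's crypto/x509, using a throwaway in-memory CA and the SAN values shown in this log rather than minikube's actual provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem/ca-key.pem from the .minikube dir.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
		DNSNames:     []string{"ha-053933-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert (%d bytes DER) with SANs %v %v\n",
		len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}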
	I1007 12:33:53.275511  766330 provision.go:177] copyRemoteCerts
	I1007 12:33:53.275578  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:33:53.275609  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.278571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.278958  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.278997  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.279237  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.279470  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.279694  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.279856  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.365609  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:33:53.365705  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:33:53.394108  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:33:53.394203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:33:53.421846  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:33:53.421930  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:33:53.448310  766330 provision.go:87] duration metric: took 438.733854ms to configureAuth
	I1007 12:33:53.448346  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:33:53.448616  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:53.448711  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.451435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.451928  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.451963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.452102  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.452316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452472  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452605  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.452784  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.452957  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.452971  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:33:53.686714  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:33:53.686753  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:33:53.686762  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetURL
	I1007 12:33:53.688034  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using libvirt version 6000000
	I1007 12:33:53.690553  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691049  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.691081  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691275  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:33:53.691309  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:33:53.691317  766330 client.go:171] duration metric: took 27.769860907s to LocalClient.Create
	I1007 12:33:53.691347  766330 start.go:167] duration metric: took 27.76993753s to libmachine.API.Create "ha-053933"
	I1007 12:33:53.691356  766330 start.go:293] postStartSetup for "ha-053933-m03" (driver="kvm2")
	I1007 12:33:53.691366  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:33:53.691384  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.691657  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:33:53.691683  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.693729  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694161  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.694191  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694359  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.694535  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.694715  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.694900  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.777573  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:33:53.782595  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:33:53.782625  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:33:53.782710  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:33:53.782804  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:33:53.782816  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:33:53.782918  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:33:53.793716  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:53.819127  766330 start.go:296] duration metric: took 127.75028ms for postStartSetup
	I1007 12:33:53.819228  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:53.819965  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.822875  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823288  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.823318  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823585  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:53.823804  766330 start.go:128] duration metric: took 27.921419624s to createHost
	I1007 12:33:53.823830  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.826389  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826755  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.826788  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826991  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.827187  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827532  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.827708  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.827909  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.827922  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:33:53.935241  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304433.915881343
	
	I1007 12:33:53.935272  766330 fix.go:216] guest clock: 1728304433.915881343
	I1007 12:33:53.935282  766330 fix.go:229] Guest: 2024-10-07 12:33:53.915881343 +0000 UTC Remote: 2024-10-07 12:33:53.823818192 +0000 UTC m=+155.718348733 (delta=92.063151ms)
	I1007 12:33:53.935303  766330 fix.go:200] guest clock delta is within tolerance: 92.063151ms
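The clock check above compares the guest's `date +%s.%N` output against the host-side timestamp taken around the SSH call and accepts the machine when the delta is small enough. A small sketch of that comparison, using the two timestamps from the log; the one-second tolerance is an assumption for illustration, not the value fix.go uses:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseUnixFrac turns "seconds.nanoseconds" (the output of `date +%s.%N`)
// into a time.Time. Hypothetical helper for this sketch only.
func parseUnixFrac(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	return time.Unix(sec, int64((f-float64(sec))*1e9)), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration only

	guest, err := parseUnixFrac("1728304433.915881343") // guest value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 10, 7, 12, 33, 53, 823818192, time.UTC) // host-side reading

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}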
	I1007 12:33:53.935309  766330 start.go:83] releasing machines lock for "ha-053933-m03", held for 28.033038751s
	I1007 12:33:53.935340  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.935600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.938944  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.939372  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.939401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.942103  766330 out.go:177] * Found network options:
	I1007 12:33:53.943700  766330 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.227
	W1007 12:33:53.945305  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.945333  766330 proxy.go:119] fail to check proxy env: Error ip not in block
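The two warnings above appear to come from testing whether the NO_PROXY entries cover the new node; plain IPs such as 192.168.39.152 are not CIDR blocks, hence the "ip not in block" message. A simplified stand-in for that kind of check; coveredByNoProxy is a hypothetical name, not minikube's proxy.go API:

package main

import (
	"fmt"
	"net"
	"strings"
)

// coveredByNoProxy reports whether ip is already covered by one of the
// NO_PROXY entries, which may be plain IPs or CIDR blocks.
func coveredByNoProxy(ip, noProxy string) bool {
	target := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == ip {
			return true
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(coveredByNoProxy("192.168.39.53", "192.168.39.152,192.168.39.227")) // false
	fmt.Println(coveredByNoProxy("192.168.39.53", "192.168.39.0/24"))               // true
}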
	I1007 12:33:53.945354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946191  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946469  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946569  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:33:53.946621  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	W1007 12:33:53.946704  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.946780  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.946900  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:33:53.946926  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.950981  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951020  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951403  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951437  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951491  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951686  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951876  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951902  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952038  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952066  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952209  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952204  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.952359  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:54.197386  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:33:54.205923  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:33:54.206059  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:33:54.226436  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:33:54.226467  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:33:54.226539  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:33:54.247190  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:33:54.263380  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:33:54.263461  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:33:54.280192  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:33:54.297621  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:33:54.421983  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:33:54.595012  766330 docker.go:233] disabling docker service ...
	I1007 12:33:54.595103  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:33:54.611124  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:33:54.625647  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:33:54.766528  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:33:54.902157  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:33:54.917030  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:33:54.939198  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:33:54.939275  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.951699  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:33:54.951792  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.963943  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.975263  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.986454  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:33:54.998449  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.010053  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.029064  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.040536  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:33:55.051384  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:33:55.051443  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:33:55.065668  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:33:55.076166  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:55.212352  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
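The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image is pinned to registry.k8s.io/pause:3.10 and cgroup_manager is switched to cgroupfs. The same two rewrites expressed in Go, operating on a hypothetical snippet of the drop-in rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of 02-crio.conf; the real file has more sections.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}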
	I1007 12:33:55.312005  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:33:55.312090  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:33:55.318387  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:33:55.318471  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:33:55.322868  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:33:55.367251  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:33:55.367355  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.397971  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.435128  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:33:55.436490  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:33:55.437841  766330 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.227
	I1007 12:33:55.439394  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:55.442218  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442572  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:55.442593  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442854  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:33:55.447427  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
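The bash one-liner above rewrites /etc/hosts through a temp file: it drops any stale host.minikube.internal line and appends the current gateway mapping. A Go rendering of the same filter-and-append step, working on an in-memory copy instead of the real file:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the current /etc/hosts contents.
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Mirror of: grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}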
	I1007 12:33:55.460437  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:33:55.460787  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:55.461177  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.461238  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.477083  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1007 12:33:55.477627  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.478242  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.478264  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.478601  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.478770  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:33:55.480358  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:55.480665  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.480703  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.497617  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I1007 12:33:55.498208  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.498771  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.498802  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.499144  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.499349  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:55.499537  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.53
	I1007 12:33:55.499550  766330 certs.go:194] generating shared ca certs ...
	I1007 12:33:55.499567  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.499698  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:33:55.499751  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:33:55.499772  766330 certs.go:256] generating profile certs ...
	I1007 12:33:55.499874  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:33:55.499909  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23
	I1007 12:33:55.499931  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:33:55.566679  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 ...
	I1007 12:33:55.566718  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23: {Name:mk9518d7a648a9de4b8c05fe89f1c3f09f2c6a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.566929  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 ...
	I1007 12:33:55.566948  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23: {Name:mkdcb7e0de901ae74037605940d4a487b0fb8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.567053  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:33:55.567210  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:33:55.567369  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:33:55.567391  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:33:55.567411  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:33:55.567431  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:33:55.567450  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:33:55.567469  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:33:55.567488  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:33:55.567506  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:33:55.586158  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:33:55.586279  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:33:55.586335  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:33:55.586352  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:33:55.586387  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:33:55.586425  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:33:55.586458  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:33:55.586517  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:55.586558  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:33:55.586579  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:55.586598  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:33:55.586646  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:55.589684  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590162  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:55.590193  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590365  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:55.590577  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:55.590763  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:55.590948  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:55.666401  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:33:55.672290  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:33:55.685836  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:33:55.691589  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:33:55.704365  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:33:55.709554  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:33:55.723585  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:33:55.728967  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:33:55.742781  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:33:55.747517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:33:55.759055  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:33:55.763953  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:33:55.775294  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:33:55.802739  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:33:55.829606  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:33:55.854203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:33:55.881501  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:33:55.907802  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:33:55.935368  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:33:55.966709  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:33:55.993237  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:33:56.018616  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:33:56.044579  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:33:56.069120  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:33:56.087293  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:33:56.105801  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:33:56.126196  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:33:56.145822  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:33:56.163980  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:33:56.182187  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:33:56.201073  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:33:56.207142  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:33:56.218685  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.223978  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.224097  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.231835  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:33:56.243660  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:33:56.255269  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260456  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260520  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.267451  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:33:56.279865  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:33:56.291556  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296671  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296755  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.303021  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:33:56.314190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:33:56.319184  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:33:56.319253  766330 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1007 12:33:56.319359  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:33:56.319393  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:33:56.319441  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:33:56.337458  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:33:56.337539  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:33:56.337609  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.352182  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:33:56.352262  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:33:56.364932  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:33:56.365107  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.365108  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:56.364948  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:33:56.365318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.365380  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.386729  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:33:56.386794  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:33:56.386811  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:33:56.386844  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:33:56.386813  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.387110  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.420143  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:33:56.420202  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
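Each binary above is fetched from dl.k8s.io and checked against the published .sha256 file before being copied into /var/lib/minikube/binaries. A self-contained sketch of that download-and-verify pattern for kubectl; the /tmp destination is illustrative, not where minikube caches binaries:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	// The .sha256 file holds the hex digest, optionally followed by a file
	// name; take the first whitespace-separated field.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	raw, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(raw))[0]

	got, err := fetch(base, "/tmp/kubectl")
	if err != nil {
		panic(err)
	}
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubectl checksum verified:", got)
}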
	I1007 12:33:57.371744  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:33:57.382647  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:33:57.402832  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:33:57.421823  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:33:57.441482  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:33:57.445627  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:57.459762  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:57.603405  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:57.624431  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:57.624969  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:57.625051  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:57.641787  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I1007 12:33:57.642353  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:57.642903  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:57.642925  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:57.643307  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:57.643533  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:57.643693  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:33:57.643829  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:33:57.643846  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:57.646962  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647481  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:57.647512  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647651  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:57.647823  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:57.647983  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:57.648106  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:57.973692  766330 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:57.973754  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1007 12:34:20.692568  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.718770843s)
	I1007 12:34:20.692609  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:34:21.235276  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m03 minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:34:21.384823  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:34:21.546452  766330 start.go:319] duration metric: took 23.902751753s to joinCluster
	I1007 12:34:21.546537  766330 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:34:21.547030  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:34:21.548080  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:34:21.549612  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:34:21.823190  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:34:21.845870  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:34:21.846263  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:34:21.846360  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:34:21.846701  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:21.846820  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:21.846832  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:21.846844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:21.846854  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:21.850883  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:22.347874  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.347909  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.347923  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.347929  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.351566  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:22.847344  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.847377  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.866723  766330 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 12:34:23.347347  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.347375  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.347387  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.347394  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.351929  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:23.847333  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.847355  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.847363  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.847372  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.850896  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:23.851597  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:24.347594  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.347622  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.347633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.347638  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.351365  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:24.847338  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.847389  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.850525  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.347474  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.347501  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.347512  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.347517  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.350876  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.847008  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.847039  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.847047  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.847052  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.850192  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.347863  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.347891  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.347899  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.347903  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.351555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.352073  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:26.847450  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.847477  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.847485  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.847489  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.851359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.347145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.347169  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.347179  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.347185  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.350867  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.847674  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.847701  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.847710  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.847715  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.851381  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.346976  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.347004  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.347016  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.347020  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.350677  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.847299  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.847324  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.847334  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.847342  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.852124  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:28.852851  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:29.347470  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.347495  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.347506  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.347511  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.351169  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:29.847063  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.847088  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.847096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.847101  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.850541  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:30.347314  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.347341  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.347349  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.347354  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.351677  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:30.847295  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.847322  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.847331  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.847337  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.851021  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.347887  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.347917  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.347928  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.347932  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.351855  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.352449  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:31.847880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.847906  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.847914  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.847918  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.851368  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.347251  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.347285  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.347297  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.347304  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.351028  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.847346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.847371  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.847380  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.847385  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.850808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.347425  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.347452  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.347461  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.347465  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.351213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.847937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.847961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.847976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.847981  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.852995  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:33.853973  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:34.347964  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.347989  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.348006  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.348012  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.351982  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:34.847651  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.847676  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.847685  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.847690  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.851757  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.347354  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.347377  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.347386  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.347390  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.351104  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.847711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.847737  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.847748  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.847753  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.858606  766330 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:34:35.859308  766330 node_ready.go:49] node "ha-053933-m03" has status "Ready":"True"
	I1007 12:34:35.859333  766330 node_ready.go:38] duration metric: took 14.012608332s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:35.859345  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:35.859442  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:35.859456  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.859468  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.859474  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.869218  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:34:35.877082  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.877211  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:34:35.877225  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.877235  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.877246  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.881909  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.883332  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.883357  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.883368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.883378  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.888505  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:35.889562  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.889584  766330 pod_ready.go:82] duration metric: took 12.462204ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889599  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889693  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:34:35.889703  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.889714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.889720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.894158  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.894859  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.894878  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.894888  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.894894  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.898314  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.898768  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.898786  766330 pod_ready.go:82] duration metric: took 9.180577ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898799  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898867  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:34:35.898875  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.898882  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.898885  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.903049  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.903727  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.903743  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.903754  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.903761  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.906490  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.907003  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.907073  766330 pod_ready.go:82] duration metric: took 8.251291ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907112  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907213  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:34:35.907222  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.907230  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.907250  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.910128  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.910735  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:35.910749  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.910760  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.910767  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.914012  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.914767  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.914789  766330 pod_ready.go:82] duration metric: took 7.665567ms for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.914802  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:36.048508  766330 request.go:632] Waited for 133.622997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048575  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048580  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.048588  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.048592  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.052571  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.248730  766330 request.go:632] Waited for 195.373798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248827  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248836  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.248844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.248849  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.251932  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.448570  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.448595  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.448605  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.448610  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.452907  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:36.647847  766330 request.go:632] Waited for 194.331001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647936  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647943  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.647951  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.647956  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.651933  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.915705  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.915729  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.915738  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.915742  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.919213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.048315  766330 request.go:632] Waited for 128.338635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048400  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048408  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.048424  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.048429  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.051185  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:37.415988  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.416012  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.416021  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.416026  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.419983  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.448134  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.448158  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.448168  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.448175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.451453  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.915937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.915961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.915971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.915976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.920167  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:37.921049  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.921073  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.921086  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.921093  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.924604  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.925286  766330 pod_ready.go:93] pod "etcd-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:37.925306  766330 pod_ready.go:82] duration metric: took 2.010496086s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:37.925324  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.048769  766330 request.go:632] Waited for 123.357964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048854  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.048866  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.048882  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.052431  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.248516  766330 request.go:632] Waited for 195.362302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248623  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248634  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.248644  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.248651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.252242  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.252762  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.252784  766330 pod_ready.go:82] duration metric: took 327.452579ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.252797  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.447801  766330 request.go:632] Waited for 194.917273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447884  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447889  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.447897  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.447902  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.451491  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.648627  766330 request.go:632] Waited for 196.37134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648716  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.648722  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.648732  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.652823  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:38.653461  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.653480  766330 pod_ready.go:82] duration metric: took 400.67636ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.653490  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.848685  766330 request.go:632] Waited for 195.113793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848879  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.848893  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.848898  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.853139  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:39.048666  766330 request.go:632] Waited for 194.422198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048757  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048765  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.048773  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.048780  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.052403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.052899  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.052921  766330 pod_ready.go:82] duration metric: took 399.423284ms for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.052935  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.248381  766330 request.go:632] Waited for 195.347943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248463  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248470  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.248479  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.248532  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.252304  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.448654  766330 request.go:632] Waited for 195.421963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448774  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448781  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.448789  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.448794  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.452418  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.452966  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.452987  766330 pod_ready.go:82] duration metric: took 400.045067ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.452997  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.648075  766330 request.go:632] Waited for 195.002627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648177  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648188  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.648196  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.648203  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.651698  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.848035  766330 request.go:632] Waited for 195.367175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848150  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848170  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.848184  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.848192  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.851573  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.852402  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.852421  766330 pod_ready.go:82] duration metric: took 399.417648ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.852432  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.048539  766330 request.go:632] Waited for 196.032961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048627  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048633  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.048641  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.048647  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.052288  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.248694  766330 request.go:632] Waited for 195.442218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248809  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248819  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.248829  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.248839  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.252540  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.253313  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.253337  766330 pod_ready.go:82] duration metric: took 400.899295ms for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.253349  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.448782  766330 request.go:632] Waited for 195.339385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448860  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.448879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.448899  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.452366  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.648273  766330 request.go:632] Waited for 194.918691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648352  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.648361  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.648367  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.651885  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.652427  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.652452  766330 pod_ready.go:82] duration metric: took 399.095883ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.652465  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.848579  766330 request.go:632] Waited for 196.00042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848642  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848648  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.848657  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.848660  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.852403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.048483  766330 request.go:632] Waited for 195.416905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048561  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048566  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.048574  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.048582  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.052281  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.052757  766330 pod_ready.go:93] pod "kube-proxy-dqqj6" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.052775  766330 pod_ready.go:82] duration metric: took 400.298296ms for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.052785  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.247821  766330 request.go:632] Waited for 194.952122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247915  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247920  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.247942  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.247958  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.251753  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.447806  766330 request.go:632] Waited for 195.292745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447871  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447876  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.447883  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.447887  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.451374  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.452013  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.452035  766330 pod_ready.go:82] duration metric: took 399.242268ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.452048  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.648060  766330 request.go:632] Waited for 195.92136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648167  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.648176  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.648181  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.652281  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:41.848221  766330 request.go:632] Waited for 195.408754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848307  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848321  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.848329  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.848332  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.851502  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.852147  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.852173  766330 pod_ready.go:82] duration metric: took 400.115446ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.852186  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.048319  766330 request.go:632] Waited for 196.021861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048415  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048421  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.048429  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.048434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.051904  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.247954  766330 request.go:632] Waited for 195.30672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248042  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248048  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.248056  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.248060  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.251799  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.252357  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.252378  766330 pod_ready.go:82] duration metric: took 400.185892ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.252389  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.448570  766330 request.go:632] Waited for 196.083361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448644  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448649  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.448658  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.448665  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.452279  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.648464  766330 request.go:632] Waited for 195.372097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648558  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648567  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.648575  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.648587  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.651837  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.652442  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.652462  766330 pod_ready.go:82] duration metric: took 400.066938ms for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.652473  766330 pod_ready.go:39] duration metric: took 6.79311586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:42.652490  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:34:42.652549  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:34:42.669655  766330 api_server.go:72] duration metric: took 21.123075945s to wait for apiserver process to appear ...
	I1007 12:34:42.669686  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:34:42.669721  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:34:42.677436  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:34:42.677526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:34:42.677533  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.677545  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.677556  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.678540  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:34:42.678609  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:34:42.678628  766330 api_server.go:131] duration metric: took 8.935272ms to wait for apiserver health ...
	I1007 12:34:42.678643  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:34:42.848087  766330 request.go:632] Waited for 169.34722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848178  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848184  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.848192  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.848197  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.854471  766330 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:34:42.861098  766330 system_pods.go:59] 24 kube-system pods found
	I1007 12:34:42.861133  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:42.861137  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:42.861141  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:42.861145  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:42.861148  766330 system_pods.go:61] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:42.861151  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:42.861154  766330 system_pods.go:61] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:42.861157  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:42.861160  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:42.861163  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:42.861166  766330 system_pods.go:61] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:42.861170  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:42.861177  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:42.861180  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:42.861182  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:42.861185  766330 system_pods.go:61] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:42.861189  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:42.861191  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:42.861194  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:42.861197  766330 system_pods.go:61] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:42.861200  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:42.861203  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:42.861206  766330 system_pods.go:61] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:42.861212  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:42.861221  766330 system_pods.go:74] duration metric: took 182.569158ms to wait for pod list to return data ...
	I1007 12:34:42.861229  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:34:43.048753  766330 request.go:632] Waited for 187.419479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048837  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.048875  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.048879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.053383  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:43.053574  766330 default_sa.go:45] found service account: "default"
	I1007 12:34:43.053596  766330 default_sa.go:55] duration metric: took 192.357019ms for default service account to be created ...
	I1007 12:34:43.053609  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:34:43.248358  766330 request.go:632] Waited for 194.661822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248434  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248457  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.248468  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.248480  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.254368  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:43.261575  766330 system_pods.go:86] 24 kube-system pods found
	I1007 12:34:43.261611  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:43.261617  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:43.261621  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:43.261625  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:43.261628  766330 system_pods.go:89] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:43.261632  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:43.261636  766330 system_pods.go:89] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:43.261641  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:43.261646  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:43.261651  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:43.261656  766330 system_pods.go:89] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:43.261665  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:43.261670  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:43.261679  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:43.261684  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:43.261689  766330 system_pods.go:89] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:43.261704  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:43.261709  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:43.261713  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:43.261719  766330 system_pods.go:89] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:43.261722  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:43.261730  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:43.261736  766330 system_pods.go:89] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:43.261739  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:43.261746  766330 system_pods.go:126] duration metric: took 208.130933ms to wait for k8s-apps to be running ...
	I1007 12:34:43.261758  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:34:43.261819  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:34:43.278366  766330 system_svc.go:56] duration metric: took 16.59381ms WaitForService to wait for kubelet
	I1007 12:34:43.278406  766330 kubeadm.go:582] duration metric: took 21.731835186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:34:43.278428  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:34:43.447722  766330 request.go:632] Waited for 169.191028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447802  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447807  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.447815  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.447822  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.451521  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:43.453111  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453136  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453151  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453154  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453158  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453161  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453165  766330 node_conditions.go:105] duration metric: took 174.732727ms to run NodePressure ...
	I1007 12:34:43.453176  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:34:43.453200  766330 start.go:255] writing updated cluster config ...
	I1007 12:34:43.453638  766330 ssh_runner.go:195] Run: rm -f paused
	I1007 12:34:43.510074  766330 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:34:43.512318  766330 out.go:177] * Done! kubectl is now configured to use "ha-053933" cluster and "default" namespace by default
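
	The pod_ready/api_server entries above show the verification flow minikube runs before declaring the ha-053933 cluster ready: poll the labelled system-critical pods in kube-system for a Ready condition, then probe the apiserver's /healthz endpoint. The following is a minimal client-go sketch of that flow for readers who want to reproduce the checks by hand; it is not minikube's own pod_ready.go/api_server.go code, and the kubeconfig path, poll interval, and timeout are illustrative assumptions.

	// readiness_sketch.go - hedged sketch of the readiness checks seen in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod carries a Ready=True condition.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; minikube writes a profile-specific config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same label selectors the log waits on for system-critical pods.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err)
				}
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						ready = false
						break
					}
				}
				if ready {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				time.Sleep(2 * time.Second) // crude poll; the real code also checks node readiness
			}
		}

		// Probe apiserver health the same way the log does (GET /healthz returning "ok").
		body, err := client.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Printf("/healthz: %s\n", body)
	}

	The repeated "Waited for ...ms due to client-side throttling" entries in the log come from client-go's default rate limiter, which is why each GET above is spaced out; the sketch simply inherits that behaviour from the clientset.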
	
	
	==> CRI-O <==
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.394488085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304703394462083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73eadf79-33af-4785-9dd2-60039a5b342d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.395129019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf2e92ac-9a7b-424d-bda0-eb18ed2e5a1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.395187568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf2e92ac-9a7b-424d-bda0-eb18ed2e5a1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.395427459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf2e92ac-9a7b-424d-bda0-eb18ed2e5a1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.435877660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee017e2b-1d74-4c77-a17c-66c926355698 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.435965632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee017e2b-1d74-4c77-a17c-66c926355698 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.437360034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fde19fd5-caa8-4b37-adc4-695507de03f7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.438228979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304703438195214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fde19fd5-caa8-4b37-adc4-695507de03f7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.438931672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3593ce5c-6e3b-4e72-81bc-9c583833cae3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.439004289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3593ce5c-6e3b-4e72-81bc-9c583833cae3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.439282383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3593ce5c-6e3b-4e72-81bc-9c583833cae3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.479491253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1961ef86-77dd-4fba-857d-0cd027560f87 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.479609115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1961ef86-77dd-4fba-857d-0cd027560f87 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.481225982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ac9b7f1-c6a3-4c19-9ccc-998d569f1673 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.482022384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304703481990466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ac9b7f1-c6a3-4c19-9ccc-998d569f1673 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.482490592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d200ebd-e28a-4bb6-ae01-56a33ab9cbdf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.482599968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d200ebd-e28a-4bb6-ae01-56a33ab9cbdf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.482817861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d200ebd-e28a-4bb6-ae01-56a33ab9cbdf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.522963100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8d31174-40b2-442b-9238-ac32f3aca661 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.523035539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8d31174-40b2-442b-9238-ac32f3aca661 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.525204245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=928c164f-5859-4123-8559-57960db4797f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.525711333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304703525683729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=928c164f-5859-4123-8559-57960db4797f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.526318363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=574fea1f-5168-45cf-9c5b-661742453be3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.526373451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574fea1f-5168-45cf-9c5b-661742453be3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:23 ha-053933 crio[664]: time="2024-10-07 12:38:23.526686770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=574fea1f-5168-45cf-9c5b-661742453be3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ba824fcefba6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e189556a18c92       busybox-7dff88458-gx88f
	2867817e1f480       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   0d58c208fea1c       coredns-7c65d6cfc9-tqtzn
	35044c701c165       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   89c61a059649d       coredns-7c65d6cfc9-sj44v
	3da0371dd7287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8d79b5c178f5d       storage-provisioner
	65adc93f12fb7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1546c9281ca68       kindnet-4gmn6
	aea74cdff9eee       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   6bb33ce6417a6       kube-proxy-7bwxp
	e756202203ed3       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   0e8b4b3150e40       kube-vip-ha-053933
	f190ed8ea3a7d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   228ca0c55468f       kube-controller-manager-ha-053933
	096488f001092       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cd767df10cb41       kube-scheduler-ha-053933
	fe11729317aca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   90cea5dfb2e91       etcd-ha-053933
	a23f58b62cf7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   706ba9f92d690       kube-apiserver-ha-053933
	
	
	==> coredns [2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4] <==
	[INFO] 10.244.1.2:56331 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237909s
	[INFO] 10.244.1.2:36489 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015207s
	[INFO] 10.244.2.2:39298 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129286s
	[INFO] 10.244.2.2:47065 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177192s
	[INFO] 10.244.2.2:34384 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120996s
	[INFO] 10.244.2.2:55346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176087s
	[INFO] 10.244.0.4:46975 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114471s
	[INFO] 10.244.0.4:58945 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225792s
	[INFO] 10.244.0.4:43259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067959s
	[INFO] 10.244.0.4:34928 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001509847s
	[INFO] 10.244.0.4:46991 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079782s
	[INFO] 10.244.0.4:59761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084499s
	[INFO] 10.244.1.2:49251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140128s
	[INFO] 10.244.1.2:33825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172303s
	[INFO] 10.244.2.2:58538 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185922s
	[INFO] 10.244.0.4:44359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137041s
	[INFO] 10.244.0.4:58301 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099102s
	[INFO] 10.244.1.2:36803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222211s
	[INFO] 10.244.1.2:41006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207899s
	[INFO] 10.244.1.2:43041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129649s
	[INFO] 10.244.2.2:45405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175032s
	[INFO] 10.244.2.2:36952 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143195s
	[INFO] 10.244.0.4:39376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106075s
	[INFO] 10.244.0.4:60091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121535s
	[INFO] 10.244.0.4:37488 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084395s
	
	
	==> coredns [35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5] <==
	[INFO] 10.244.2.2:33316 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000351738s
	[INFO] 10.244.2.2:40861 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001441898s
	[INFO] 10.244.0.4:57140 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000078781s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135026s
	[INFO] 10.244.1.2:54055 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005238284s
	[INFO] 10.244.1.2:56033 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250432s
	[INFO] 10.244.1.2:35801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184148s
	[INFO] 10.244.1.2:59610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190826s
	[INFO] 10.244.2.2:33184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859772s
	[INFO] 10.244.2.2:46345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160195s
	[INFO] 10.244.2.2:58454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001735681s
	[INFO] 10.244.2.2:51235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213117s
	[INFO] 10.244.0.4:40361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002214882s
	[INFO] 10.244.0.4:35596 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091564s
	[INFO] 10.244.1.2:54454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176281s
	[INFO] 10.244.1.2:54571 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089015s
	[INFO] 10.244.2.2:54102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258038s
	[INFO] 10.244.2.2:51160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106978s
	[INFO] 10.244.2.2:57393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167598s
	[INFO] 10.244.0.4:39801 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084483s
	[INFO] 10.244.0.4:60729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097532s
	[INFO] 10.244.1.2:36580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164463s
	[INFO] 10.244.2.2:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036575s
	[INFO] 10.244.2.2:54375 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256014s
	[INFO] 10.244.0.4:46032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082269s
	
	
	==> describe nodes <==
	Name:               ha-053933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-053933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 081ddd3e0f204426846b528e120c10c6
	  System UUID:                081ddd3e-0f20-4426-846b-528e120c10c6
	  Boot ID:                    1dece28a-ef9e-423f-833d-5ccfd814e28e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gx88f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-sj44v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-tqtzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-053933                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-4gmn6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-053933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-053933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-7bwxp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-053933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-053933                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m13s  kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-053933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-053933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-053933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  NodeReady                6m3s   kubelet          Node ha-053933 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	
	
	Name:               ha-053933-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:33:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:35:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-053933-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea0094a740a940c483867f94cc6c27db
	  System UUID:                ea0094a7-40a9-40c4-8386-7f94cc6c27db
	  Boot ID:                    c270f988-c787-4383-b26b-ec82a3153fd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cll72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-053933-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-cx4hw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-053933-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-053933-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-zvblz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-053933-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-053933-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m21s                  cidrAllocator    Node ha-053933-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-053933-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-053933-m02 status is now: NodeNotReady
	
	
	Name:               ha-053933-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-053933-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2c62335e69d4ef7b1309ece17e10873
	  System UUID:                c2c62335-e69d-4ef7-b130-9ece17e10873
	  Boot ID:                    2e17b6e0-0617-4bea-8b9d-8cd903a9fcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvw9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-053933-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-6tzch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-053933-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-053933-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-dqqj6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-053933-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-053933-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-053933-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-053933-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	
	
	Name:               ha-053933-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_35_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-053933-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 114115be4a5e4a82bdbd4b86727c66b7
	  System UUID:                114115be-4a5e-4a82-bdbd-4b86727c66b7
	  Boot ID:                    dba1fc43-1911-4c9b-b57d-d3bef52a7eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-874mt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-proxy-wmjjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)  kubelet          Node ha-053933-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m5s                 cidrAllocator    Node ha-053933-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-053933-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.846047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.647512] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.009818] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056187] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087371] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186817] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.108690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.296967] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.247594] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.068909] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.901650] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.502104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 12:32] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.286659] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +5.238921] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.342023] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 12:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866] <==
	{"level":"warn","ts":"2024-10-07T12:38:23.780842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.809915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.817123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.821302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.833413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.840778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.846915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.850981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.855850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.864066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.874205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.880393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.880615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.884393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.888354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.898420Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.905790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.912614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.912791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.918042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.923269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.929032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.937482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.943788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:23.980723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:38:24 up 7 min,  0 users,  load average: 0.17, 0.17, 0.08
	Linux ha-053933 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c] <==
	I1007 12:37:50.810872       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.814625       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:00.814833       1 main.go:299] handling current node
	I1007 12:38:00.814970       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:00.814985       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.815723       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:00.815798       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:00.815998       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:00.816057       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:10.808104       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:10.808153       1 main.go:299] handling current node
	I1007 12:38:10.808168       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:10.808173       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:10.808359       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:10.808385       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:10.808430       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:10.808435       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812716       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:20.812802       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812961       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:20.812985       1 main.go:299] handling current node
	I1007 12:38:20.813004       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:20.813010       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:20.813053       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:20.813073       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38] <==
	I1007 12:32:02.949969       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1007 12:32:02.963249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I1007 12:32:02.964729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:32:02.971941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:32:03.069138       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 12:32:03.964342       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 12:32:03.987254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:32:04.095813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:32:08.516111       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 12:32:08.611991       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1007 12:34:48.798901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37568: use of closed network connection
	E1007 12:34:49.000124       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37592: use of closed network connection
	E1007 12:34:49.206162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37608: use of closed network connection
	E1007 12:34:49.419763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E1007 12:34:49.618246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37650: use of closed network connection
	E1007 12:34:49.830698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37678: use of closed network connection
	E1007 12:34:50.014306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37698: use of closed network connection
	E1007 12:34:50.203031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37722: use of closed network connection
	E1007 12:34:50.399836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37736: use of closed network connection
	E1007 12:34:50.721906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37754: use of closed network connection
	E1007 12:34:50.916874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37778: use of closed network connection
	E1007 12:34:51.129244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37784: use of closed network connection
	E1007 12:34:51.331880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37804: use of closed network connection
	E1007 12:34:51.534234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37816: use of closed network connection
	E1007 12:34:51.740225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37836: use of closed network connection
	
	
	==> kube-controller-manager [f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255] <==
	E1007 12:35:18.261020       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-053933-m04': failed to patch node CIDR: Node \"ha-053933-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1007 12:35:18.261043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.267395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.419356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.886255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.927634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m03"
	I1007 12:35:21.910317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.213570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.317164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.867893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.869105       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-053933-m04"
	I1007 12:35:22.944595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:28.233385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.043630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.044602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:35:36.061944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.755307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:48.386926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:36:37.247180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:36:37.247992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.283173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.296003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.649837ms"
	I1007 12:36:37.296097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.311µs"
	I1007 12:36:37.968993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:42.526972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	
	
	==> kube-proxy [aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:32:09.744772       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:32:09.779605       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	E1007 12:32:09.779729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:32:09.875780       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:32:09.875870       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:32:09.875896       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:32:09.899096       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:32:09.900043       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:32:09.900063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:32:09.904977       1 config.go:199] "Starting service config controller"
	I1007 12:32:09.905625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:32:09.905998       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:32:09.906007       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:32:09.909098       1 config.go:328] "Starting node config controller"
	I1007 12:32:09.912651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:32:10.006461       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:32:10.006556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:32:10.013752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525] <==
	W1007 12:32:02.522045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:32:02.522209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:32:02.691725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:32:02.691861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 12:32:04.967169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 12:35:18.155212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.155405       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 055fbe2f-0b88-4875-9ee5-5672731cf7e9(kube-system/kindnet-tskmj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tskmj"
	E1007 12:35:18.155442       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-tskmj"
	I1007 12:35:18.155464       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.234037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.235784       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17a817ae-69ea-44f0-907d-a935057c340a(kube-system/kube-proxy-hkx4p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hkx4p"
	E1007 12:35:18.235899       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-hkx4p"
	I1007 12:35:18.235923       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.234494       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.237640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fe0255b5-5ad9-4633-a28d-ecdf64a0267c(kube-system/kindnet-gbqh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gbqh5"
	E1007 12:35:18.237709       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-gbqh5"
	I1007 12:35:18.237727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.300436       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300714       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71fc4648-ffa7-4b9c-b3be-35c98da41798(kube-system/kube-proxy-wmjjq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wmjjq"
	E1007 12:35:18.300906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-wmjjq"
	I1007 12:35:18.301040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	E1007 12:35:18.302463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cbe2af3e-e15d-4855-b598-450159e1b100(kube-system/kindnet-874mt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-874mt"
	E1007 12:35:18.302498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-874mt"
	I1007 12:35:18.302596       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	
	
	==> kubelet <==
	Oct 07 12:37:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:37:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248076    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248142    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250603    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250995    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252717    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252763    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.255287    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.257649    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.260273    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.261117    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264814    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264871    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.151993    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266021    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266073    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267592    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267615    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271756    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271782    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr: (3.971232626s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (1.468243605s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m03_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-053933 node start m02 -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:31:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:31:18.148064  766330 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:18.148178  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148182  766330 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:18.148187  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148357  766330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:18.148967  766330 out.go:352] Setting JSON to false
	I1007 12:31:18.149958  766330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8027,"bootTime":1728296251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:18.150102  766330 start.go:139] virtualization: kvm guest
	I1007 12:31:18.152485  766330 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:18.154248  766330 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:18.154296  766330 notify.go:220] Checking for updates...
	I1007 12:31:18.157253  766330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:18.159046  766330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:18.160370  766330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.161706  766330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:18.163112  766330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:18.164841  766330 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:18.202110  766330 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:31:18.203531  766330 start.go:297] selected driver: kvm2
	I1007 12:31:18.203547  766330 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:31:18.203562  766330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:18.204518  766330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.204603  766330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:31:18.220705  766330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:31:18.220766  766330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:31:18.221021  766330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:31:18.221059  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:18.221106  766330 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:31:18.221116  766330 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:31:18.221169  766330 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:18.221279  766330 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.223403  766330 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:31:18.224688  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:18.224749  766330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:31:18.224761  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:31:18.224844  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:31:18.224857  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:31:18.225188  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:18.225228  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json: {Name:mk42211822a040c72189a8c96b9ffb20916f09bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:18.225385  766330 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:31:18.225414  766330 start.go:364] duration metric: took 16.211µs to acquireMachinesLock for "ha-053933"
	I1007 12:31:18.225431  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:31:18.225482  766330 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:31:18.227000  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:31:18.227165  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:18.227217  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:18.241971  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1007 12:31:18.242468  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:18.243060  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:31:18.243086  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:18.243440  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:18.243664  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:18.243802  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:18.243958  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:31:18.243992  766330 client.go:168] LocalClient.Create starting
	I1007 12:31:18.244024  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:31:18.244058  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244073  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244137  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:31:18.244157  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244173  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244190  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:31:18.244198  766330 main.go:141] libmachine: (ha-053933) Calling .PreCreateCheck
	I1007 12:31:18.244526  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:18.244944  766330 main.go:141] libmachine: Creating machine...
	I1007 12:31:18.244959  766330 main.go:141] libmachine: (ha-053933) Calling .Create
	I1007 12:31:18.245125  766330 main.go:141] libmachine: (ha-053933) Creating KVM machine...
	I1007 12:31:18.246330  766330 main.go:141] libmachine: (ha-053933) DBG | found existing default KVM network
	I1007 12:31:18.247162  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.246970  766353 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1007 12:31:18.247250  766330 main.go:141] libmachine: (ha-053933) DBG | created network xml: 
	I1007 12:31:18.247277  766330 main.go:141] libmachine: (ha-053933) DBG | <network>
	I1007 12:31:18.247291  766330 main.go:141] libmachine: (ha-053933) DBG |   <name>mk-ha-053933</name>
	I1007 12:31:18.247307  766330 main.go:141] libmachine: (ha-053933) DBG |   <dns enable='no'/>
	I1007 12:31:18.247318  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247331  766330 main.go:141] libmachine: (ha-053933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:31:18.247341  766330 main.go:141] libmachine: (ha-053933) DBG |     <dhcp>
	I1007 12:31:18.247353  766330 main.go:141] libmachine: (ha-053933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:31:18.247366  766330 main.go:141] libmachine: (ha-053933) DBG |     </dhcp>
	I1007 12:31:18.247382  766330 main.go:141] libmachine: (ha-053933) DBG |   </ip>
	I1007 12:31:18.247394  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247403  766330 main.go:141] libmachine: (ha-053933) DBG | </network>
	I1007 12:31:18.247414  766330 main.go:141] libmachine: (ha-053933) DBG | 
	I1007 12:31:18.252550  766330 main.go:141] libmachine: (ha-053933) DBG | trying to create private KVM network mk-ha-053933 192.168.39.0/24...
	I1007 12:31:18.323012  766330 main.go:141] libmachine: (ha-053933) DBG | private KVM network mk-ha-053933 192.168.39.0/24 created
	I1007 12:31:18.323051  766330 main.go:141] libmachine: (ha-053933) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.323065  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.322988  766353 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.323078  766330 main.go:141] libmachine: (ha-053933) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:31:18.323220  766330 main.go:141] libmachine: (ha-053933) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:31:18.600250  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.600066  766353 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa...
	I1007 12:31:18.865018  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864813  766353 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk...
	I1007 12:31:18.865057  766330 main.go:141] libmachine: (ha-053933) DBG | Writing magic tar header
	I1007 12:31:18.865071  766330 main.go:141] libmachine: (ha-053933) DBG | Writing SSH key tar header
	I1007 12:31:18.865083  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864941  766353 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.865103  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933
	I1007 12:31:18.865116  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 (perms=drwx------)
	I1007 12:31:18.865126  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:31:18.865135  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.865141  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:31:18.865149  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:31:18.865159  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:31:18.865166  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:31:18.865180  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home
	I1007 12:31:18.865192  766330 main.go:141] libmachine: (ha-053933) DBG | Skipping /home - not owner
	I1007 12:31:18.865206  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:31:18.865221  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:31:18.865229  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:31:18.865238  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:31:18.865245  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:18.866439  766330 main.go:141] libmachine: (ha-053933) define libvirt domain using xml: 
	I1007 12:31:18.866466  766330 main.go:141] libmachine: (ha-053933) <domain type='kvm'>
	I1007 12:31:18.866476  766330 main.go:141] libmachine: (ha-053933)   <name>ha-053933</name>
	I1007 12:31:18.866483  766330 main.go:141] libmachine: (ha-053933)   <memory unit='MiB'>2200</memory>
	I1007 12:31:18.866492  766330 main.go:141] libmachine: (ha-053933)   <vcpu>2</vcpu>
	I1007 12:31:18.866503  766330 main.go:141] libmachine: (ha-053933)   <features>
	I1007 12:31:18.866510  766330 main.go:141] libmachine: (ha-053933)     <acpi/>
	I1007 12:31:18.866520  766330 main.go:141] libmachine: (ha-053933)     <apic/>
	I1007 12:31:18.866530  766330 main.go:141] libmachine: (ha-053933)     <pae/>
	I1007 12:31:18.866546  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866569  766330 main.go:141] libmachine: (ha-053933)   </features>
	I1007 12:31:18.866589  766330 main.go:141] libmachine: (ha-053933)   <cpu mode='host-passthrough'>
	I1007 12:31:18.866598  766330 main.go:141] libmachine: (ha-053933)   
	I1007 12:31:18.866607  766330 main.go:141] libmachine: (ha-053933)   </cpu>
	I1007 12:31:18.866617  766330 main.go:141] libmachine: (ha-053933)   <os>
	I1007 12:31:18.866624  766330 main.go:141] libmachine: (ha-053933)     <type>hvm</type>
	I1007 12:31:18.866630  766330 main.go:141] libmachine: (ha-053933)     <boot dev='cdrom'/>
	I1007 12:31:18.866636  766330 main.go:141] libmachine: (ha-053933)     <boot dev='hd'/>
	I1007 12:31:18.866641  766330 main.go:141] libmachine: (ha-053933)     <bootmenu enable='no'/>
	I1007 12:31:18.866647  766330 main.go:141] libmachine: (ha-053933)   </os>
	I1007 12:31:18.866652  766330 main.go:141] libmachine: (ha-053933)   <devices>
	I1007 12:31:18.866659  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='cdrom'>
	I1007 12:31:18.866666  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/boot2docker.iso'/>
	I1007 12:31:18.866673  766330 main.go:141] libmachine: (ha-053933)       <target dev='hdc' bus='scsi'/>
	I1007 12:31:18.866678  766330 main.go:141] libmachine: (ha-053933)       <readonly/>
	I1007 12:31:18.866683  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866691  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='disk'>
	I1007 12:31:18.866702  766330 main.go:141] libmachine: (ha-053933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:31:18.866711  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk'/>
	I1007 12:31:18.866722  766330 main.go:141] libmachine: (ha-053933)       <target dev='hda' bus='virtio'/>
	I1007 12:31:18.866731  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866737  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866745  766330 main.go:141] libmachine: (ha-053933)       <source network='mk-ha-053933'/>
	I1007 12:31:18.866749  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866755  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866759  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866768  766330 main.go:141] libmachine: (ha-053933)       <source network='default'/>
	I1007 12:31:18.866775  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866780  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866786  766330 main.go:141] libmachine: (ha-053933)     <serial type='pty'>
	I1007 12:31:18.866791  766330 main.go:141] libmachine: (ha-053933)       <target port='0'/>
	I1007 12:31:18.866798  766330 main.go:141] libmachine: (ha-053933)     </serial>
	I1007 12:31:18.866802  766330 main.go:141] libmachine: (ha-053933)     <console type='pty'>
	I1007 12:31:18.866810  766330 main.go:141] libmachine: (ha-053933)       <target type='serial' port='0'/>
	I1007 12:31:18.866821  766330 main.go:141] libmachine: (ha-053933)     </console>
	I1007 12:31:18.866827  766330 main.go:141] libmachine: (ha-053933)     <rng model='virtio'>
	I1007 12:31:18.866834  766330 main.go:141] libmachine: (ha-053933)       <backend model='random'>/dev/random</backend>
	I1007 12:31:18.866840  766330 main.go:141] libmachine: (ha-053933)     </rng>
	I1007 12:31:18.866844  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866850  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866855  766330 main.go:141] libmachine: (ha-053933)   </devices>
	I1007 12:31:18.866860  766330 main.go:141] libmachine: (ha-053933) </domain>
	I1007 12:31:18.866868  766330 main.go:141] libmachine: (ha-053933) 
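	Note: the domain XML dumped above is defined and then started through libvirt (the "Creating domain..." steps that follow). As a rough illustration only, not the kvm2 driver's actual code, the same define-then-create flow can be sketched with the libvirt.org/go/libvirt bindings; the connection URI mirrors the KVMQemuURI shown in the cluster config further down, and the error handling here is an assumption.

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart sketches the define-then-create flow seen in the log above.
// domainXML would be the <domain ...>...</domain> document printed by the driver.
func defineAndStart(domainXML string) error {
	// qemu:///system matches the KVMQemuURI value logged later for this cluster.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..." — boots the defined machine
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
```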
	I1007 12:31:18.871598  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:91:b8:36 in network default
	I1007 12:31:18.872268  766330 main.go:141] libmachine: (ha-053933) Ensuring networks are active...
	I1007 12:31:18.872288  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:18.873069  766330 main.go:141] libmachine: (ha-053933) Ensuring network default is active
	I1007 12:31:18.873363  766330 main.go:141] libmachine: (ha-053933) Ensuring network mk-ha-053933 is active
	I1007 12:31:18.873853  766330 main.go:141] libmachine: (ha-053933) Getting domain xml...
	I1007 12:31:18.874562  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:19.211616  766330 main.go:141] libmachine: (ha-053933) Waiting to get IP...
	I1007 12:31:19.212423  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.212778  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.212812  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.212764  766353 retry.go:31] will retry after 226.747121ms: waiting for machine to come up
	I1007 12:31:19.441331  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.441786  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.441837  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.441730  766353 retry.go:31] will retry after 274.527206ms: waiting for machine to come up
	I1007 12:31:19.718508  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.719027  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.719064  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.718969  766353 retry.go:31] will retry after 356.880394ms: waiting for machine to come up
	I1007 12:31:20.077626  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.078112  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.078145  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.078091  766353 retry.go:31] will retry after 415.686035ms: waiting for machine to come up
	I1007 12:31:20.495868  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.496297  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.496328  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.496232  766353 retry.go:31] will retry after 565.036299ms: waiting for machine to come up
	I1007 12:31:21.062533  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.063181  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.063212  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.063112  766353 retry.go:31] will retry after 934.304139ms: waiting for machine to come up
	I1007 12:31:21.999277  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.999729  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.999763  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.999684  766353 retry.go:31] will retry after 862.178533ms: waiting for machine to come up
	I1007 12:31:22.863123  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:22.863626  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:22.863658  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:22.863574  766353 retry.go:31] will retry after 1.201609733s: waiting for machine to come up
	I1007 12:31:24.066671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:24.067072  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:24.067104  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:24.067015  766353 retry.go:31] will retry after 1.419758916s: waiting for machine to come up
	I1007 12:31:25.488770  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:25.489216  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:25.489240  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:25.489182  766353 retry.go:31] will retry after 2.248635623s: waiting for machine to come up
	I1007 12:31:27.740776  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:27.741277  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:27.741301  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:27.741240  766353 retry.go:31] will retry after 1.919055927s: waiting for machine to come up
	I1007 12:31:29.662363  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:29.662857  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:29.663141  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:29.662878  766353 retry.go:31] will retry after 3.284332028s: waiting for machine to come up
	I1007 12:31:32.951614  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:32.952006  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:32.952134  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:32.951952  766353 retry.go:31] will retry after 3.413281695s: waiting for machine to come up
	I1007 12:31:36.369285  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:36.369674  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:36.369704  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:36.369624  766353 retry.go:31] will retry after 5.240968669s: waiting for machine to come up
	I1007 12:31:41.615028  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615539  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has current primary IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615555  766330 main.go:141] libmachine: (ha-053933) Found IP for machine: 192.168.39.152
	I1007 12:31:41.615563  766330 main.go:141] libmachine: (ha-053933) Reserving static IP address...
	I1007 12:31:41.615914  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "ha-053933", mac: "52:54:00:7e:91:1b", ip: "192.168.39.152"} in network mk-ha-053933
	I1007 12:31:41.698423  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:41.698453  766330 main.go:141] libmachine: (ha-053933) Reserved static IP address: 192.168.39.152
	I1007 12:31:41.698466  766330 main.go:141] libmachine: (ha-053933) Waiting for SSH to be available...
	I1007 12:31:41.701233  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.701575  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933
	I1007 12:31:41.701604  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:7e:91:1b
	I1007 12:31:41.701733  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:41.701762  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:41.701811  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:41.701844  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:41.701865  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:41.705812  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:31:41.705841  766330 main.go:141] libmachine: (ha-053933) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:31:41.705848  766330 main.go:141] libmachine: (ha-053933) DBG | command : exit 0
	I1007 12:31:41.705853  766330 main.go:141] libmachine: (ha-053933) DBG | err     : exit status 255
	I1007 12:31:41.705861  766330 main.go:141] libmachine: (ha-053933) DBG | output  : 
	I1007 12:31:44.706593  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:44.709072  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709617  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.709649  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709785  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:44.709814  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:44.709843  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:44.709856  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:44.709871  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:44.834399  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: <nil>: 
	I1007 12:31:44.834682  766330 main.go:141] libmachine: (ha-053933) KVM machine creation complete!
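	Note: the "waiting for machine to come up" lines above show the driver polling for a DHCP lease with a growing, jittered delay (retry.go:31), then doing the same for SSH until `exit 0` succeeds. A minimal sketch of that poll-with-backoff pattern, using a hypothetical probe function rather than minikube's retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe() with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log. probe stands in for "look up the
// domain's DHCP lease" or "run `exit 0` over SSH".
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Add jitter and grow the delay, roughly like the intervals above
		// (226ms, 274ms, 356ms, ... 5.2s).
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}
```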
	I1007 12:31:44.834978  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:44.835619  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.835838  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.836043  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:31:44.836062  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:31:44.837184  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:31:44.837198  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:31:44.837203  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:31:44.837209  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.839398  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839807  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.839830  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839939  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.840108  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840281  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840429  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.840654  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.840918  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.840931  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:31:44.945582  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:44.945632  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:31:44.945644  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.948258  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948719  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.948754  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948921  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.949136  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949341  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949504  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.949690  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.949946  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.949963  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:31:45.055227  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:31:45.055350  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:31:45.055364  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:31:45.055378  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055638  766330 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:31:45.055680  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055865  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.058671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059121  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.059156  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059299  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.059582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059753  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059896  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.060046  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.060230  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.060242  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:31:45.177180  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:31:45.177214  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.180205  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180610  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.180640  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.181104  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181275  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181434  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.181657  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.181837  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.181854  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:31:45.296167  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
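	Note: each "About to run SSH command" / "SSH cmd err, output" pair above is one command executed over SSH with the machine's generated key (the hostname and /etc/hosts edits, for example). A small sketch of such a runner using golang.org/x/crypto/ssh; the helper name and host-key handling are illustrative, not minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects as the given user with a private key and returns the
// combined output of one remote command.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log's StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.152:22", "docker",
		"/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa",
		`sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```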
	I1007 12:31:45.296213  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:31:45.296262  766330 buildroot.go:174] setting up certificates
	I1007 12:31:45.296275  766330 provision.go:84] configureAuth start
	I1007 12:31:45.296287  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.296598  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.299370  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299721  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.299769  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.302528  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.302981  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.303013  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.303173  766330 provision.go:143] copyHostCerts
	I1007 12:31:45.303222  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303263  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:31:45.303285  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303361  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:31:45.303500  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303523  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:31:45.303528  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303559  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:31:45.303616  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303633  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:31:45.303637  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303657  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:31:45.303708  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
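	Note: the "generating server cert" step issues a server certificate signed by the local minikube CA, carrying the SAN list shown above (127.0.0.1, the node IP, the node name, localhost, minikube). A compact crypto/x509 sketch of issuing such a cert, assuming the CA cert and key are already available; the helper name and the throwaway CA in main are hypothetical:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, using the SANs
// logged above. The function name and layout are illustrative only.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration value in the cluster config below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-053933", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.152")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	der, _, err := issueServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
```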
	I1007 12:31:45.422772  766330 provision.go:177] copyRemoteCerts
	I1007 12:31:45.422847  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:31:45.422884  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.426109  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426432  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.426461  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426620  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.426796  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.426987  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.427121  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.508256  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:31:45.508354  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:31:45.535023  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:31:45.535097  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:31:45.561047  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:31:45.561146  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:31:45.586470  766330 provision.go:87] duration metric: took 290.178076ms to configureAuth
	I1007 12:31:45.586509  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:31:45.586752  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:45.586838  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.589503  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.589873  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.589917  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.590215  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.590402  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590554  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590703  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.590899  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.591142  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.591160  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:31:45.816081  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:31:45.816125  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:31:45.816137  766330 main.go:141] libmachine: (ha-053933) Calling .GetURL
	I1007 12:31:45.817540  766330 main.go:141] libmachine: (ha-053933) DBG | Using libvirt version 6000000
	I1007 12:31:45.820289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820694  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.820725  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820851  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:31:45.820871  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:31:45.820882  766330 client.go:171] duration metric: took 27.576881663s to LocalClient.Create
	I1007 12:31:45.820914  766330 start.go:167] duration metric: took 27.57695761s to libmachine.API.Create "ha-053933"
	I1007 12:31:45.820939  766330 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:31:45.820955  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:31:45.820986  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:45.821218  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:31:45.821261  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.823471  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.823791  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.823834  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.824015  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.824234  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.824403  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.824535  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.905405  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:31:45.910330  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:31:45.910363  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:31:45.910424  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:31:45.910498  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:31:45.910509  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:31:45.910617  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:31:45.921262  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:45.947335  766330 start.go:296] duration metric: took 126.377039ms for postStartSetup
	I1007 12:31:45.947395  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:45.948057  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.950566  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.950901  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.950931  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.951158  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:45.951337  766330 start.go:128] duration metric: took 27.725842508s to createHost
	I1007 12:31:45.951369  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.953682  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954057  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.954084  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954210  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.954414  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954585  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954727  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.954891  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.955077  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.955089  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:31:46.059048  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304306.039624942
	
	I1007 12:31:46.059075  766330 fix.go:216] guest clock: 1728304306.039624942
	I1007 12:31:46.059083  766330 fix.go:229] Guest: 2024-10-07 12:31:46.039624942 +0000 UTC Remote: 2024-10-07 12:31:45.951349706 +0000 UTC m=+27.845880248 (delta=88.275236ms)
	I1007 12:31:46.059106  766330 fix.go:200] guest clock delta is within tolerance: 88.275236ms
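	Note: the clock check above compares the guest's `date +%s.%N` output against the host's reference timestamp and accepts the result when the delta is small (~88ms here). A minimal parsing sketch; the helper name and the 2s tolerance are assumptions, as the log only shows that this delta passes:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the host reference time captured before the SSH call.
func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

func main() {
	// Values taken from the log lines above.
	hostRef := time.Date(2024, 10, 7, 12, 31, 45, 951349706, time.UTC)
	delta, err := clockDelta("1728304306.039624942", hostRef)
	if err != nil {
		panic(err)
	}
	fmt.Println("guest clock delta:", delta)
	// Assumed tolerance check; the actual threshold is not shown in the log.
	if delta < -2*time.Second || delta > 2*time.Second {
		fmt.Println("delta out of tolerance, would adjust guest clock")
	}
}
```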
	I1007 12:31:46.059111  766330 start.go:83] releasing machines lock for "ha-053933", held for 27.833688154s
	I1007 12:31:46.059131  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.059394  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:46.062064  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062406  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.062431  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062578  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063106  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063318  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063436  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:31:46.063484  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.063563  766330 ssh_runner.go:195] Run: cat /version.json
	I1007 12:31:46.063582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.066118  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066393  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066431  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066454  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066641  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066729  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066762  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066811  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.066931  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066955  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067124  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.067115  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.067267  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067400  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.143506  766330 ssh_runner.go:195] Run: systemctl --version
	I1007 12:31:46.170858  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:31:46.332209  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:31:46.338580  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:31:46.338677  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:46.356826  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:31:46.356863  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:31:46.356954  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:31:46.374524  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:31:46.390007  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:31:46.390089  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:31:46.404935  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:31:46.420186  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:31:46.537561  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:31:46.724537  766330 docker.go:233] disabling docker service ...
	I1007 12:31:46.724631  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:31:46.740520  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:31:46.754710  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:31:46.868070  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:31:46.983211  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:31:46.998357  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:31:47.018646  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:31:47.018734  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.030677  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:31:47.030766  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.042531  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.053856  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.065763  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:31:47.077170  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.088459  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.106901  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.118161  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:31:47.128388  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:31:47.128462  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:31:47.142126  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:31:47.154515  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:47.283963  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:31:47.385321  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:31:47.385405  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:31:47.390485  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:31:47.390552  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:31:47.394825  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:31:47.439074  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:31:47.439187  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.469132  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.501636  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:31:47.503367  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:47.506449  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.506817  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:47.506859  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.507082  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:31:47.511597  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:47.525698  766330 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:31:47.525829  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:47.525874  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:47.561011  766330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:31:47.561094  766330 ssh_runner.go:195] Run: which lz4
	I1007 12:31:47.565196  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:31:47.565316  766330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:31:47.569571  766330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:31:47.569613  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:31:49.022834  766330 crio.go:462] duration metric: took 1.457534476s to copy over tarball
	I1007 12:31:49.022945  766330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:31:51.131868  766330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108889496s)
	I1007 12:31:51.131914  766330 crio.go:469] duration metric: took 2.109034387s to extract the tarball
	I1007 12:31:51.131926  766330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:31:51.169816  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:51.217403  766330 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:31:51.217431  766330 cache_images.go:84] Images are preloaded, skipping loading
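
The block above is minikube's preload path on a fresh VM: crictl images --output json does not yet report registry.k8s.io/kube-apiserver:v1.31.1, so the lz4-compressed preload tarball is copied over SSH and unpacked into /var, after which the same crictl query confirms every image is present. Below is a minimal Go sketch of that check-then-extract idea; the image name, tarball path, and tar flags are taken from the log above, but the helper itself is illustrative and not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensurePreload is illustrative: if the expected image is not reported by
// crictl, unpack the preloaded image tarball into /var (as the log does).
func ensurePreload(expectedImage, tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("crictl images: %w", err)
	}
	if strings.Contains(string(out), expectedImage) {
		return nil // images already preloaded, nothing to do
	}
	// Same tar flags as the log: keep xattrs and decompress with lz4.
	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := extract.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := ensurePreload("registry.k8s.io/kube-apiserver:v1.31.1", "/preloaded.tar.lz4")
	if err != nil {
		fmt.Println(err)
	}
}
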
	I1007 12:31:51.217440  766330 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:31:51.217556  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:31:51.217655  766330 ssh_runner.go:195] Run: crio config
	I1007 12:31:51.271379  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:51.271408  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:31:51.271420  766330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:31:51.271445  766330 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:31:51.271623  766330 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:31:51.271654  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:31:51.271699  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:31:51.289463  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:31:51.289607  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
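
The generated kube-vip static pod above is what backs the APIServerHAVIP (192.168.39.254) from the cluster config: the env vars enable ARP advertisement of the VIP on eth0, leader election via the plndr-cp-lock lease, and, with lb_enable/lb_port, load-balancing of apiserver traffic on port 8443 across control-plane members. An illustrative way to confirm the VIP fronts a healthy apiserver is to probe /healthz on it; the sketch below is not part of minikube, skips TLS verification instead of loading the cluster CA, and assumes anonymous access to /healthz is allowed (the Kubernetes default) and that the 192.168.39.0/24 network is reachable from where it runs.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut: skip verification rather than trusting the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// 192.168.39.254 is the kube-vip virtual IP from the config above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver via VIP:", resp.Status)
}
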
	I1007 12:31:51.289677  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:31:51.300325  766330 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:31:51.300403  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:31:51.311044  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:31:51.329552  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:31:51.347746  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:31:51.366188  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:31:51.384590  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:31:51.388865  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:51.402571  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:51.531092  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:51.550538  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:31:51.550568  766330 certs.go:194] generating shared ca certs ...
	I1007 12:31:51.550589  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.550791  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:31:51.550844  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:31:51.550855  766330 certs.go:256] generating profile certs ...
	I1007 12:31:51.550949  766330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:31:51.550971  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt with IP's: []
	I1007 12:31:51.873489  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt ...
	I1007 12:31:51.873532  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt: {Name:mkf7b8a7f4d9827c14fd0fbc8bb02e2f79d65528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873758  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key ...
	I1007 12:31:51.873776  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key: {Name:mk6b5a827040be723c18ebdcd9fe7d1599565bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873894  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a
	I1007 12:31:51.873912  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I1007 12:31:52.061549  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a ...
	I1007 12:31:52.061587  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a: {Name:mk1a012d659f1c8c4afc92ca485eba408eb37a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061787  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a ...
	I1007 12:31:52.061804  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a: {Name:mkb1195bd1ddd6ea78076dea0e840887aeae92ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061908  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:31:52.062012  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:31:52.062107  766330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:31:52.062125  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt with IP's: []
	I1007 12:31:52.119663  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt ...
	I1007 12:31:52.119698  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt: {Name:mkf6d674dcac47b878e8df13383f77bcf932d249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.119900  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key ...
	I1007 12:31:52.119913  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key: {Name:mk301510b9dc1296a9e7f127da3f0d4b86905808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.120033  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:31:52.120053  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:31:52.120064  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:31:52.120077  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:31:52.120087  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:31:52.120118  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:31:52.120142  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:31:52.120155  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:31:52.120209  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:31:52.120251  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:31:52.120261  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:31:52.120290  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:31:52.120312  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:31:52.120339  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:31:52.120379  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:52.120408  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.120422  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.120434  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.121128  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:31:52.149003  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:31:52.175017  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:31:52.201648  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:31:52.228352  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:31:52.255290  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:31:52.282215  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:31:52.309286  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:31:52.337694  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:31:52.366883  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:31:52.402754  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:31:52.430306  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:31:52.451397  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:31:52.458450  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:31:52.470676  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476879  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476941  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.483560  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:31:52.495531  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:31:52.507273  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512685  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512760  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.519035  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:31:52.530701  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:31:52.542163  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547093  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547169  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.553420  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
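
The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA directory layout: each certificate copied to /usr/share/ca-certificates is hashed by subject, and a symlink named <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem) is created under /etc/ssl/certs so TLS clients can find the CA by hash lookup. A hedged Go sketch of the same idea, shelling out to openssl for the subject hash rather than reimplementing it; this is illustrative, not minikube's code, and the paths would normally require root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the openssl/ln commands in the log: compute the
// certificate's subject hash and symlink it as "<hash>.0" in certDir.
func linkBySubjectHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // behave like ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
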
	I1007 12:31:52.565081  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:31:52.569549  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:31:52.569630  766330 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:52.569737  766330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:31:52.569800  766330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:31:52.613192  766330 cri.go:89] found id: ""
	I1007 12:31:52.613311  766330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:31:52.625713  766330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:31:52.636220  766330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:31:52.646590  766330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:31:52.646626  766330 kubeadm.go:157] found existing configuration files:
	
	I1007 12:31:52.646686  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:31:52.656870  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:31:52.656944  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:31:52.667467  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:31:52.677109  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:31:52.677186  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:31:52.687168  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.696969  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:31:52.697035  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.706604  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:31:52.716252  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:31:52.716325  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
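
The grep/rm sequence above is the stale-config check that precedes kubeadm init: for each kubeconfig under /etc/kubernetes, minikube looks for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint is missing; here none of the files exist yet, so every grep exits with status 2 and the removals are no-ops. A minimal sketch of that decision follows; the endpoint and file list are taken from the log, while the helper is illustrative rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when it exists but does not mention the expected
// API endpoint, so that kubeadm init can regenerate it afterwards.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up, as in this log
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right control-plane endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
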
	I1007 12:31:52.726572  766330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:31:52.847487  766330 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:31:52.847581  766330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:31:52.955260  766330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:31:52.955420  766330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:31:52.955545  766330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:31:52.964537  766330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:31:53.051755  766330 out.go:235]   - Generating certificates and keys ...
	I1007 12:31:53.051938  766330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:31:53.052035  766330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:31:53.320791  766330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:31:53.468201  766330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:31:53.842801  766330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:31:53.969642  766330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:31:54.101242  766330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:31:54.101440  766330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.456134  766330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:31:54.456354  766330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.521797  766330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:31:54.769778  766330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:31:55.125227  766330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:31:55.125448  766330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:31:55.361551  766330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:31:55.783698  766330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:31:56.057409  766330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:31:56.211507  766330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:31:56.348279  766330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:31:56.349002  766330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:31:56.353525  766330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:31:56.355620  766330 out.go:235]   - Booting up control plane ...
	I1007 12:31:56.355760  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:31:56.356147  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:31:56.356974  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:31:56.373175  766330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:31:56.381538  766330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:31:56.381594  766330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:31:56.521323  766330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:31:56.521511  766330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:31:57.022943  766330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.739695ms
	I1007 12:31:57.023054  766330 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:32:03.058810  766330 kubeadm.go:310] [api-check] The API server is healthy after 6.037121779s
	I1007 12:32:03.072819  766330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:32:03.101026  766330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:32:03.645977  766330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:32:03.646231  766330 kubeadm.go:310] [mark-control-plane] Marking the node ha-053933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:32:03.661217  766330 kubeadm.go:310] [bootstrap-token] Using token: ofkgus.681l1bfefmhh1xkb
	I1007 12:32:03.662957  766330 out.go:235]   - Configuring RBAC rules ...
	I1007 12:32:03.663116  766330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:32:03.674911  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:32:03.697863  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:32:03.703512  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:32:03.708092  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:32:03.713563  766330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:32:03.734636  766330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:32:03.997011  766330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:32:04.464216  766330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:32:04.465131  766330 kubeadm.go:310] 
	I1007 12:32:04.465191  766330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:32:04.465199  766330 kubeadm.go:310] 
	I1007 12:32:04.465336  766330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:32:04.465360  766330 kubeadm.go:310] 
	I1007 12:32:04.465394  766330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:32:04.465446  766330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:32:04.465491  766330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:32:04.465504  766330 kubeadm.go:310] 
	I1007 12:32:04.465572  766330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:32:04.465599  766330 kubeadm.go:310] 
	I1007 12:32:04.465644  766330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:32:04.465663  766330 kubeadm.go:310] 
	I1007 12:32:04.465719  766330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:32:04.465794  766330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:32:04.465885  766330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:32:04.465901  766330 kubeadm.go:310] 
	I1007 12:32:04.466075  766330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:32:04.466193  766330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:32:04.466201  766330 kubeadm.go:310] 
	I1007 12:32:04.466294  766330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466394  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:32:04.466415  766330 kubeadm.go:310] 	--control-plane 
	I1007 12:32:04.466421  766330 kubeadm.go:310] 
	I1007 12:32:04.466490  766330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:32:04.466497  766330 kubeadm.go:310] 
	I1007 12:32:04.466565  766330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466661  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:32:04.467760  766330 kubeadm.go:310] W1007 12:31:52.830915     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468039  766330 kubeadm.go:310] W1007 12:31:52.831996     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468166  766330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
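
The kubeadm init output above ends with the join commands for additional control-plane and worker nodes, both carrying --discovery-token-ca-cert-hash sha256:c52291ef.... kubeadm documents that value as the SHA-256 of the cluster CA's Subject Public Key Info, so it can be recomputed from ca.crt to verify a join command out of band. The sketch below does that recomputation; the certificate path is the one used throughout this log, and the program is illustrative rather than part of the test suite.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Recompute kubeadm's --discovery-token-ca-cert-hash from the cluster CA.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM data found in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The published hash is SHA-256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
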
	I1007 12:32:04.468194  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:32:04.468205  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:32:04.470298  766330 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:32:04.471574  766330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:32:04.477802  766330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:32:04.477826  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:32:04.497072  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:32:04.906135  766330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:32:04.906201  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:04.906237  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933 minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=true
	I1007 12:32:05.063682  766330 ops.go:34] apiserver oom_adj: -16
	I1007 12:32:05.063698  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:05.564187  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.063920  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.563953  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.064483  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.564765  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.064739  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.564036  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.063899  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.198443  766330 kubeadm.go:1113] duration metric: took 4.292302963s to wait for elevateKubeSystemPrivileges
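
The run of kubectl get sa default calls at roughly half-second intervals above is a wait loop: the minikube-rbac clusterrolebinding created just before it only becomes useful once the service account controller has created the default service account, so minikube polls until that query succeeds (about 4.3s here, per the elevateKubeSystemPrivileges metric). A sketch of the same polling pattern, using the kubectl binary and kubeconfig paths from the log; the loop itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only after the service account controller has created "default".
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
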
	I1007 12:32:09.198484  766330 kubeadm.go:394] duration metric: took 16.62887336s to StartCluster
	I1007 12:32:09.198511  766330 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.198603  766330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.199399  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.199661  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:32:09.199654  766330 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:09.199683  766330 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:32:09.199750  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:32:09.199769  766330 addons.go:69] Setting storage-provisioner=true in profile "ha-053933"
	I1007 12:32:09.199790  766330 addons.go:234] Setting addon storage-provisioner=true in "ha-053933"
	I1007 12:32:09.199789  766330 addons.go:69] Setting default-storageclass=true in profile "ha-053933"
	I1007 12:32:09.199827  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.199861  766330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-053933"
	I1007 12:32:09.199924  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:09.200250  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200297  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.200379  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200403  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.217502  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I1007 12:32:09.217554  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1007 12:32:09.217985  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218145  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218593  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218622  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.218725  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218753  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.219006  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219124  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219326  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.219637  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.219691  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.221998  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.222368  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:32:09.223019  766330 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:32:09.223381  766330 addons.go:234] Setting addon default-storageclass=true in "ha-053933"
	I1007 12:32:09.223435  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.223846  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.223902  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.237604  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I1007 12:32:09.238161  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.238820  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.238847  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.239267  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.239621  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.242388  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.242754  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1007 12:32:09.243274  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.243977  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.244007  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.244396  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.244986  766330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:32:09.245068  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.245147  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.246976  766330 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.247004  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:32:09.247031  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.251289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.251823  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.251851  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.252064  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.252294  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.252448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.252580  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.263439  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1007 12:32:09.263833  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.264713  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.264733  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.265269  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.265519  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.267198  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.267411  766330 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:09.267431  766330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:32:09.267448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.271160  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.271638  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.271652  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.272078  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.272247  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.272388  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.272476  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.422833  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:32:09.443940  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.510999  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:10.102670  766330 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
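
The kubectl get configmap coredns | sed | kubectl replace pipeline at 12:32:09.422833, confirmed by the line above, rewrites the CoreDNS Corefile so pods can resolve host.minikube.internal to the host-only gateway 192.168.39.1: a hosts block with a fallthrough directive is spliced in just before the forward plugin (the same sed also inserts a log directive before errors). A hedged Go sketch of that text transformation on a sample Corefile; the sample content is typical rather than copied from this cluster.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord splices a hosts{} stanza in front of the forward plugin,
// mirroring the sed expression in the log. Illustrative only.
func injectHostRecord(corefile, ip, hostname string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, hostname)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, hostsBlock+marker, 1)
}

func main() {
	// A typical (assumed) Corefile fragment, not the exact one from this cluster.
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
}
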
	I1007 12:32:10.350678  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350704  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.350784  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350815  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351026  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351046  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351056  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351063  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351128  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.351191  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351222  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351239  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351246  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.352633  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352653  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352669  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.352691  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352714  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352813  766330 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:32:10.352834  766330 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:32:10.352951  766330 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:32:10.352963  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.352974  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.352984  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.364518  766330 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:32:10.365197  766330 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:32:10.365213  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.365222  766330 round_trippers.go:473]     Content-Type: application/json
	I1007 12:32:10.365226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.365229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.368346  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
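
Editorial note: the GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is minikube marking the "standard" StorageClass as the cluster default through the HA VIP (192.168.39.254:8443). A hedged client-go sketch of an equivalent update — the kubeconfig path is taken from the log and is an assumption outside this environment:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig; inside the node minikube uses
	// /var/lib/minikube/kubeconfig as shown earlier in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Mark the class as default, then write it back (the PUT / 200 OK above).
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
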
	I1007 12:32:10.368537  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.368555  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.368875  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.368889  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.368895  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.371604  766330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:32:10.373030  766330 addons.go:510] duration metric: took 1.173351959s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:32:10.373068  766330 start.go:246] waiting for cluster config update ...
	I1007 12:32:10.373085  766330 start.go:255] writing updated cluster config ...
	I1007 12:32:10.375098  766330 out.go:201] 
	I1007 12:32:10.377249  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:10.377439  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.379490  766330 out.go:177] * Starting "ha-053933-m02" control-plane node in "ha-053933" cluster
	I1007 12:32:10.381087  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:32:10.381130  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:32:10.381324  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:32:10.381339  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:32:10.381436  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.381664  766330 start.go:360] acquireMachinesLock for ha-053933-m02: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:32:10.381718  766330 start.go:364] duration metric: took 27.543µs to acquireMachinesLock for "ha-053933-m02"
	I1007 12:32:10.381752  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:10.381840  766330 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:32:10.383550  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:32:10.383680  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:10.383748  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:10.399329  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1007 12:32:10.399900  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:10.400460  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:10.400489  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:10.400855  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:10.401087  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:10.401325  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:10.401564  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:32:10.401597  766330 client.go:168] LocalClient.Create starting
	I1007 12:32:10.401634  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:32:10.401683  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401708  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401774  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:32:10.401806  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401824  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401883  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:32:10.401911  766330 main.go:141] libmachine: (ha-053933-m02) Calling .PreCreateCheck
	I1007 12:32:10.402163  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:10.402584  766330 main.go:141] libmachine: Creating machine...
	I1007 12:32:10.402602  766330 main.go:141] libmachine: (ha-053933-m02) Calling .Create
	I1007 12:32:10.402815  766330 main.go:141] libmachine: (ha-053933-m02) Creating KVM machine...
	I1007 12:32:10.404630  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing default KVM network
	I1007 12:32:10.404848  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing private KVM network mk-ha-053933
	I1007 12:32:10.405187  766330 main.go:141] libmachine: (ha-053933-m02) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.405209  766330 main.go:141] libmachine: (ha-053933-m02) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:32:10.405302  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.405168  766716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.405466  766330 main.go:141] libmachine: (ha-053933-m02) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:32:10.686269  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.686123  766716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa...
	I1007 12:32:10.953304  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953079  766716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk...
	I1007 12:32:10.953335  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing magic tar header
	I1007 12:32:10.953347  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing SSH key tar header
	I1007 12:32:10.953354  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953302  766716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.953491  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02
	I1007 12:32:10.953520  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 (perms=drwx------)
	I1007 12:32:10.953532  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:32:10.953546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.953559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:32:10.953567  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:32:10.953577  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:32:10.953583  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:32:10.953594  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:32:10.953602  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:32:10.953610  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:32:10.953626  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:10.953639  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:32:10.953649  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home
	I1007 12:32:10.953661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Skipping /home - not owner
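
Editorial note: the permission walk above climbs from the new machine directory toward /, making each owned directory traversable, and stops where the jenkins user is not the owner ("Skipping /home - not owner"). A simplified sketch of the idea — the real code distinguishes owner-only (drwx------) from world-readable modes, which this version glosses over:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureTraversable walks from dir up to the filesystem root, adding execute
// bits where chmod is permitted and skipping directories it cannot change.
func ensureTraversable(dir string) {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return
		}
		if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
			fmt.Printf("skipping %s: %v\n", dir, err)
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return
		}
		dir = parent
	}
}

func main() {
	ensureTraversable("/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02")
}
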
	I1007 12:32:10.954892  766330 main.go:141] libmachine: (ha-053933-m02) define libvirt domain using xml: 
	I1007 12:32:10.954919  766330 main.go:141] libmachine: (ha-053933-m02) <domain type='kvm'>
	I1007 12:32:10.954926  766330 main.go:141] libmachine: (ha-053933-m02)   <name>ha-053933-m02</name>
	I1007 12:32:10.954934  766330 main.go:141] libmachine: (ha-053933-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:32:10.954971  766330 main.go:141] libmachine: (ha-053933-m02)   <vcpu>2</vcpu>
	I1007 12:32:10.954998  766330 main.go:141] libmachine: (ha-053933-m02)   <features>
	I1007 12:32:10.955008  766330 main.go:141] libmachine: (ha-053933-m02)     <acpi/>
	I1007 12:32:10.955019  766330 main.go:141] libmachine: (ha-053933-m02)     <apic/>
	I1007 12:32:10.955028  766330 main.go:141] libmachine: (ha-053933-m02)     <pae/>
	I1007 12:32:10.955038  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955048  766330 main.go:141] libmachine: (ha-053933-m02)   </features>
	I1007 12:32:10.955059  766330 main.go:141] libmachine: (ha-053933-m02)   <cpu mode='host-passthrough'>
	I1007 12:32:10.955086  766330 main.go:141] libmachine: (ha-053933-m02)   
	I1007 12:32:10.955107  766330 main.go:141] libmachine: (ha-053933-m02)   </cpu>
	I1007 12:32:10.955118  766330 main.go:141] libmachine: (ha-053933-m02)   <os>
	I1007 12:32:10.955130  766330 main.go:141] libmachine: (ha-053933-m02)     <type>hvm</type>
	I1007 12:32:10.955144  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='cdrom'/>
	I1007 12:32:10.955153  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='hd'/>
	I1007 12:32:10.955164  766330 main.go:141] libmachine: (ha-053933-m02)     <bootmenu enable='no'/>
	I1007 12:32:10.955170  766330 main.go:141] libmachine: (ha-053933-m02)   </os>
	I1007 12:32:10.955176  766330 main.go:141] libmachine: (ha-053933-m02)   <devices>
	I1007 12:32:10.955183  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='cdrom'>
	I1007 12:32:10.955199  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/boot2docker.iso'/>
	I1007 12:32:10.955214  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:32:10.955226  766330 main.go:141] libmachine: (ha-053933-m02)       <readonly/>
	I1007 12:32:10.955236  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955247  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='disk'>
	I1007 12:32:10.955259  766330 main.go:141] libmachine: (ha-053933-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:32:10.955273  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk'/>
	I1007 12:32:10.955284  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:32:10.955295  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955317  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955337  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='mk-ha-053933'/>
	I1007 12:32:10.955355  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955372  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955385  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955397  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='default'/>
	I1007 12:32:10.955410  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955419  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955429  766330 main.go:141] libmachine: (ha-053933-m02)     <serial type='pty'>
	I1007 12:32:10.955444  766330 main.go:141] libmachine: (ha-053933-m02)       <target port='0'/>
	I1007 12:32:10.955456  766330 main.go:141] libmachine: (ha-053933-m02)     </serial>
	I1007 12:32:10.955483  766330 main.go:141] libmachine: (ha-053933-m02)     <console type='pty'>
	I1007 12:32:10.955500  766330 main.go:141] libmachine: (ha-053933-m02)       <target type='serial' port='0'/>
	I1007 12:32:10.955516  766330 main.go:141] libmachine: (ha-053933-m02)     </console>
	I1007 12:32:10.955528  766330 main.go:141] libmachine: (ha-053933-m02)     <rng model='virtio'>
	I1007 12:32:10.955541  766330 main.go:141] libmachine: (ha-053933-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:32:10.955552  766330 main.go:141] libmachine: (ha-053933-m02)     </rng>
	I1007 12:32:10.955562  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955574  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955588  766330 main.go:141] libmachine: (ha-053933-m02)   </devices>
	I1007 12:32:10.955599  766330 main.go:141] libmachine: (ha-053933-m02) </domain>
	I1007 12:32:10.955606  766330 main.go:141] libmachine: (ha-053933-m02) 
	I1007 12:32:10.964084  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:92:85:a0 in network default
	I1007 12:32:10.964913  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring networks are active...
	I1007 12:32:10.964943  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:10.966004  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network default is active
	I1007 12:32:10.966794  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network mk-ha-053933 is active
	I1007 12:32:10.967567  766330 main.go:141] libmachine: (ha-053933-m02) Getting domain xml...
	I1007 12:32:10.968704  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:11.328435  766330 main.go:141] libmachine: (ha-053933-m02) Waiting to get IP...
	I1007 12:32:11.329255  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.329657  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.329684  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.329635  766716 retry.go:31] will retry after 304.626046ms: waiting for machine to come up
	I1007 12:32:11.636452  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.636889  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.636919  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.636838  766716 retry.go:31] will retry after 276.587443ms: waiting for machine to come up
	I1007 12:32:11.915507  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.915953  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.915981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.915913  766716 retry.go:31] will retry after 337.132979ms: waiting for machine to come up
	I1007 12:32:12.254562  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.255006  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.255031  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.254957  766716 retry.go:31] will retry after 414.173139ms: waiting for machine to come up
	I1007 12:32:12.670554  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.670981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.671027  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.670964  766716 retry.go:31] will retry after 736.75735ms: waiting for machine to come up
	I1007 12:32:13.409001  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:13.409465  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:13.409492  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:13.409419  766716 retry.go:31] will retry after 877.012423ms: waiting for machine to come up
	I1007 12:32:14.288329  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:14.288723  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:14.288753  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:14.288684  766716 retry.go:31] will retry after 1.037556164s: waiting for machine to come up
	I1007 12:32:15.327401  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:15.327809  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:15.327836  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:15.327768  766716 retry.go:31] will retry after 1.075590546s: waiting for machine to come up
	I1007 12:32:16.404635  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:16.405141  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:16.405170  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:16.405088  766716 retry.go:31] will retry after 1.694642723s: waiting for machine to come up
	I1007 12:32:18.101812  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:18.102290  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:18.102307  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:18.102257  766716 retry.go:31] will retry after 2.246296895s: waiting for machine to come up
	I1007 12:32:20.351742  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:20.352251  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:20.352273  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:20.352201  766716 retry.go:31] will retry after 2.298110151s: waiting for machine to come up
	I1007 12:32:22.653604  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:22.654280  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:22.654305  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:22.654158  766716 retry.go:31] will retry after 3.347094149s: waiting for machine to come up
	I1007 12:32:26.003104  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:26.003592  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:26.003618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:26.003545  766716 retry.go:31] will retry after 3.946300567s: waiting for machine to come up
	I1007 12:32:29.951184  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:29.951661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:29.951683  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:29.951615  766716 retry.go:31] will retry after 4.942604939s: waiting for machine to come up
	I1007 12:32:34.900038  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.900804  766330 main.go:141] libmachine: (ha-053933-m02) Found IP for machine: 192.168.39.227
	I1007 12:32:34.900839  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
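
Editorial note: the string of "will retry after …" lines above is a bounded wait loop with a growing delay around a DHCP-lease lookup for the domain's MAC address. A simplified Go sketch of that shape — the lookup function is a stand-in, not the libvirt query minikube actually issues:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup with a growing delay (roughly 0.3s up to a few
// seconds, like the retry.go lines in the log) and gives up after deadline.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		select {
		case <-timeout:
			return "", errors.New("timed out waiting for machine IP")
		case <-time.After(delay):
		}
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.227", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
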
	I1007 12:32:34.900847  766330 main.go:141] libmachine: (ha-053933-m02) Reserving static IP address...
	I1007 12:32:34.901345  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "ha-053933-m02", mac: "52:54:00:e8:71:ec", ip: "192.168.39.227"} in network mk-ha-053933
	I1007 12:32:34.989559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:34.989593  766330 main.go:141] libmachine: (ha-053933-m02) Reserved static IP address: 192.168.39.227
	I1007 12:32:34.989607  766330 main.go:141] libmachine: (ha-053933-m02) Waiting for SSH to be available...
	I1007 12:32:34.993000  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.993348  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933
	I1007 12:32:34.993372  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:e8:71:ec
	I1007 12:32:34.993535  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:34.993565  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:34.993595  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:34.993608  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:34.993625  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:34.997438  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:32:34.997462  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:32:34.997471  766330 main.go:141] libmachine: (ha-053933-m02) DBG | command : exit 0
	I1007 12:32:34.997493  766330 main.go:141] libmachine: (ha-053933-m02) DBG | err     : exit status 255
	I1007 12:32:34.997502  766330 main.go:141] libmachine: (ha-053933-m02) DBG | output  : 
	I1007 12:32:38.000138  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:38.003563  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.003934  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.003965  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.004068  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:38.004097  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:38.004133  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:38.004156  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:38.004198  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:38.134356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: <nil>: 
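
Editorial note: WaitForSSH simply runs "exit 0" over the external ssh binary with the non-interactive options shown above until it returns status 0; the first attempt fails with status 255 because no lease had been handed out yet. A hedged os/exec sketch of that probe — the address and key path are copied from the log and are assumptions outside this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the guest; a nil error means sshd is up and the
// key is accepted.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.39.227"
	key := "/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa"
	// Roughly matches the ~3s gap between WaitForSSH attempts in the log.
	for !sshReady(addr, key) {
		fmt.Println("ssh not ready yet, retrying...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("ssh is available")
}
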
	I1007 12:32:38.134575  766330 main.go:141] libmachine: (ha-053933-m02) KVM machine creation complete!
	I1007 12:32:38.134919  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:38.135497  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135718  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135838  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:32:38.135854  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetState
	I1007 12:32:38.137125  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:32:38.137139  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:32:38.137144  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:32:38.137149  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.139531  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140008  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.140029  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140173  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.140353  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140459  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.140739  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.140945  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.140955  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:32:38.245844  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.245874  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:32:38.245883  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.249067  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249461  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.249482  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249773  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.249996  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250184  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250363  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.250493  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.250691  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.250704  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:32:38.363524  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:32:38.363625  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:32:38.363640  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:32:38.363656  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364053  766330 buildroot.go:166] provisioning hostname "ha-053933-m02"
	I1007 12:32:38.364084  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364321  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.367546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368073  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.368107  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368323  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.368535  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368704  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368874  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.369073  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.369311  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.369326  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m02 && echo "ha-053933-m02" | sudo tee /etc/hostname
	I1007 12:32:38.493958  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m02
	
	I1007 12:32:38.493990  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.496774  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497161  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.497193  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497352  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.497571  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497746  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497916  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.498140  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.498312  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.498329  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:32:38.616208  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.616246  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:32:38.616266  766330 buildroot.go:174] setting up certificates
	I1007 12:32:38.616276  766330 provision.go:84] configureAuth start
	I1007 12:32:38.616286  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.616609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:38.619075  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619398  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.619427  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619572  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.621757  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622105  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.622129  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622285  766330 provision.go:143] copyHostCerts
	I1007 12:32:38.622318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622352  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:32:38.622361  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622432  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:32:38.622511  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622529  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:32:38.622535  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622558  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:32:38.622599  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622622  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:32:38.622630  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622663  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:32:38.622733  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m02 san=[127.0.0.1 192.168.39.227 ha-053933-m02 localhost minikube]
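
Editorial note: provision.go generates a per-machine server certificate signed by the shared minikube CA, with the SANs listed above (127.0.0.1, the node IP, the node name, localhost, minikube). A compressed crypto/x509 sketch of that signing step — file names, the Subject layout, and the assumption that the CA key is PKCS#1-encoded are illustrative, and error handling is reduced to panics to keep it short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the shared CA (ca.pem / ca-key.pem from the profile's certs dir).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the machine's server.pem / server-key.pem.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	// SANs as reported in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-053933-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)

	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
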
	I1007 12:32:38.708452  766330 provision.go:177] copyRemoteCerts
	I1007 12:32:38.708528  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:32:38.708564  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.710962  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711285  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.711318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711472  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.711655  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.711820  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.711918  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:38.799093  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:32:38.799174  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:32:38.827105  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:32:38.827188  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:32:38.854871  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:32:38.854953  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:32:38.882148  766330 provision.go:87] duration metric: took 265.856123ms to configureAuth
	I1007 12:32:38.882180  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:32:38.882387  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:38.882485  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.885151  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885511  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.885545  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885761  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.885978  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886151  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886344  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.886506  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.886695  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.886715  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:32:39.128135  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:32:39.128167  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:32:39.128176  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetURL
	I1007 12:32:39.129618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using libvirt version 6000000
	I1007 12:32:39.132019  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132387  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.132415  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132625  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:32:39.132640  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:32:39.132647  766330 client.go:171] duration metric: took 28.73104158s to LocalClient.Create
	I1007 12:32:39.132672  766330 start.go:167] duration metric: took 28.731111532s to libmachine.API.Create "ha-053933"
	I1007 12:32:39.132682  766330 start.go:293] postStartSetup for "ha-053933-m02" (driver="kvm2")
	I1007 12:32:39.132692  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:32:39.132710  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.132980  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:32:39.133017  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.135744  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136124  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.136167  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136341  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.136530  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.136675  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.136835  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.221605  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:32:39.226484  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:32:39.226514  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:32:39.226584  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:32:39.226655  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:32:39.226665  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:32:39.226746  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:32:39.237427  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:39.261998  766330 start.go:296] duration metric: took 129.301228ms for postStartSetup
	I1007 12:32:39.262093  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:39.262719  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.265384  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.265792  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.265819  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.266155  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:39.266397  766330 start.go:128] duration metric: took 28.884542194s to createHost
	I1007 12:32:39.266428  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.268718  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.268995  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.269035  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.269138  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.269298  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269463  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269575  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.269703  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:39.269878  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:39.269888  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:32:39.379504  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304359.360836408
	
	I1007 12:32:39.379530  766330 fix.go:216] guest clock: 1728304359.360836408
	I1007 12:32:39.379539  766330 fix.go:229] Guest: 2024-10-07 12:32:39.360836408 +0000 UTC Remote: 2024-10-07 12:32:39.26641087 +0000 UTC m=+81.160941412 (delta=94.425538ms)
	I1007 12:32:39.379557  766330 fix.go:200] guest clock delta is within tolerance: 94.425538ms
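The fix.go lines above compare the guest clock against the host clock and skip any adjustment because the ~94ms delta is within tolerance. A minimal sketch of that comparison (the helper name and the one-second threshold are illustrative assumptions, not minikube's actual constants):

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no time adjustment is needed.
func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(94 * time.Millisecond) // delta of the same order as in the log above
	fmt.Println("within tolerance:", withinTolerance(host, guest, time.Second))
}
```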
	I1007 12:32:39.379562  766330 start.go:83] releasing machines lock for "ha-053933-m02", held for 28.997822917s
	I1007 12:32:39.379579  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.379889  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.383410  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.383763  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.383796  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.386874  766330 out.go:177] * Found network options:
	I1007 12:32:39.388989  766330 out.go:177]   - NO_PROXY=192.168.39.152
	W1007 12:32:39.390421  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.390479  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391270  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391484  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391605  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:32:39.391666  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	W1007 12:32:39.391801  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.391871  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:32:39.391887  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.394867  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.394901  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395284  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395339  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395674  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395681  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395918  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.395928  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.396088  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396100  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396238  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.396245  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.642441  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:32:39.649674  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:32:39.649767  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:32:39.666653  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:32:39.666687  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:32:39.666767  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:32:39.684589  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:32:39.700168  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:32:39.700231  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:32:39.716005  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:32:39.731764  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:32:39.862714  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:32:40.011007  766330 docker.go:233] disabling docker service ...
	I1007 12:32:40.011096  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:32:40.027322  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:32:40.041607  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:32:40.187585  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:32:40.331438  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:32:40.347382  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:32:40.367495  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:32:40.367556  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.379748  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:32:40.379840  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.391760  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.403745  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.415505  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:32:40.428366  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.441667  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.460916  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
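The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf over SSH to pin the pause image and switch cri-o to the cgroupfs cgroup manager. A local sketch of the same two substitutions, assuming the drop-in file has been read into a string (applyCrioOverrides is a hypothetical helper for illustration, not minikube code, which runs sed remotely instead):

```go
package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mirrors the sed edits from the log above: it points
// cri-o at the desired pause image and switches the cgroup manager.
func applyCrioOverrides(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(conf, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
```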
	I1007 12:32:40.473748  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:32:40.485573  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:32:40.485645  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:32:40.500703  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:32:40.512028  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:40.646960  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:32:40.739246  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:32:40.739338  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:32:40.744292  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:32:40.744359  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:32:40.748439  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:32:40.790232  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:32:40.790320  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.827829  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.860461  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:32:40.862462  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:32:40.864274  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:40.867846  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868296  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:40.868323  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868742  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:32:40.873673  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:40.887367  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:32:40.887606  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:40.887888  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.887931  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.903464  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:32:40.903898  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.904410  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.904433  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.904903  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.905134  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:40.906904  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:40.907228  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.907278  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.922960  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I1007 12:32:40.923502  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.924055  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.924078  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.924407  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.924586  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:40.924737  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.227
	I1007 12:32:40.924756  766330 certs.go:194] generating shared ca certs ...
	I1007 12:32:40.924778  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:40.924946  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:32:40.925010  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:32:40.925020  766330 certs.go:256] generating profile certs ...
	I1007 12:32:40.925169  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:32:40.925208  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90
	I1007 12:32:40.925226  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.254]
	I1007 12:32:41.148971  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 ...
	I1007 12:32:41.149006  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90: {Name:mkfc72ac98e5f64b1efa978f83502cc26e6b00b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149188  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 ...
	I1007 12:32:41.149202  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90: {Name:mkb6d827b308c96cc8f5173b1a5723adff201a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149277  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:32:41.149418  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
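The apiserver profile certificate generated above embeds the service VIP, localhost, both control-plane node IPs and the kube-vip address as IP SANs. A self-contained sketch that emits a certificate with the same SAN list; minikube actually signs with its minikubeCA, so the self-signing here is a simplification for illustration only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs matching the list in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.152"), net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed: template doubles as the issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```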
	I1007 12:32:41.149564  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:32:41.149589  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:32:41.149603  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:32:41.149618  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:32:41.149632  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:32:41.149645  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:32:41.149658  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:32:41.149670  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:32:41.149681  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:32:41.149730  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:32:41.149764  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:32:41.149774  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:32:41.149801  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:32:41.149822  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:32:41.149848  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:32:41.149885  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:41.149911  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.149925  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.149937  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.149971  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:41.153293  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153635  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:41.153659  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:41.154192  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:41.154376  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:41.154520  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:41.226577  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:32:41.232730  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:32:41.245060  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:32:41.251197  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:32:41.264593  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:32:41.269517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:32:41.281560  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:32:41.286754  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:32:41.299707  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:32:41.304594  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:32:41.317916  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:32:41.323393  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:32:41.336013  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:32:41.366179  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:32:41.393458  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:32:41.419874  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:32:41.447814  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:32:41.474678  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:32:41.500522  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:32:41.527411  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:32:41.552513  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:32:41.576732  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:32:41.602701  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:32:41.628143  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:32:41.644998  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:32:41.662248  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:32:41.679785  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:32:41.698239  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:32:41.717010  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:32:41.735412  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:32:41.753557  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:32:41.759787  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:32:41.771601  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776332  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776414  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.782579  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:32:41.793992  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:32:41.806293  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811220  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811296  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.817656  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:32:41.829292  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:32:41.840880  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845905  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845988  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.852343  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
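The openssl/ln steps above register each CA under its OpenSSL subject-hash name in /etc/ssl/certs (for example minikubeCA.pem becomes the b5213941.0 symlink). A small sketch of that pattern, shelling out to openssl for the hash and creating the symlink; installCACert is a hypothetical helper, not part of minikube:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), which is how the log above registers
// minikubeCA.pem and the extra *.pem files as trusted CAs.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```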
	I1007 12:32:41.864190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:32:41.868675  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:32:41.868747  766330 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1007 12:32:41.868844  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:32:41.868868  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:32:41.868905  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:32:41.889715  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:32:41.889813  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:32:41.889876  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.901277  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:32:41.901344  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.911928  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:32:41.911964  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912020  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912066  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:32:41.912079  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:32:41.917061  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:32:41.917099  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:32:42.483195  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.483287  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.490132  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:32:42.490184  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:32:42.569436  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:32:42.620637  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.620740  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.635485  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:32:42.635527  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
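Because /var/lib/minikube/binaries/v1.31.1 does not exist on the new node, the kubectl, kubeadm and kubelet binaries are fetched from dl.k8s.io with a checksum reference (the ?checksum=file:... syntax above is handled by minikube's download layer) and then copied over SSH. A sketch of the equivalent download-and-verify step, assuming the published .sha256 file carries the hex digest in its first whitespace-separated field:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchWithSHA256 downloads url to dest and verifies it against the hex
// digest published at sumURL.
func fetchWithSHA256(url, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumURL)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	if err := fetchWithSHA256(base, base+".sha256", "kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```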
	I1007 12:32:43.157634  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:32:43.168142  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:32:43.185353  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:32:43.203562  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:32:43.222930  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:32:43.227330  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:43.240979  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:43.377709  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:32:43.396837  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:43.397301  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:43.397366  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:43.414130  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1007 12:32:43.414696  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:43.415312  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:43.415338  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:43.415686  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:43.415901  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:43.416102  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:32:43.416222  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:32:43.416248  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:43.419194  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419695  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:43.419728  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419860  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:43.420045  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:43.420225  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:43.420387  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:43.569631  766330 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:43.569697  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I1007 12:33:05.382098  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (21.812371374s)
	I1007 12:33:05.382136  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:33:05.983459  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m02 minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:33:06.136889  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:33:06.286153  766330 start.go:319] duration metric: took 22.870046293s to joinCluster
	I1007 12:33:06.286246  766330 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:06.286558  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:06.288312  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:33:06.290220  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:06.583421  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:06.686534  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:33:06.686755  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:33:06.686819  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:33:06.687163  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m02" to be "Ready" ...
	I1007 12:33:06.687340  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:06.687357  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:06.687368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:06.687373  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:06.711245  766330 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1007 12:33:07.188212  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.188242  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.188255  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.188274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.191359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:07.688452  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.688484  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.688497  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.688502  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.808189  766330 round_trippers.go:574] Response Status: 200 OK in 119 milliseconds
	I1007 12:33:08.187451  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.187480  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.187491  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.187496  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.191935  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:08.687677  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.687701  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.687711  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.687719  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.690915  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:08.691670  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:09.188237  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.188270  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.188281  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.188289  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.194158  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:09.687515  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.687547  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.687557  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.687562  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.690808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.188360  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.188385  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.188394  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.188400  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.191880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.688056  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.688084  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.688096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.688104  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.691003  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:11.188165  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.188195  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.188206  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.188211  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.191751  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:11.192284  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:11.687697  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.687733  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.687744  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.687751  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.692471  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:12.187925  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.187959  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.187971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.187977  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.191580  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:12.687588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.687620  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.687631  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.687637  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.691690  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:13.187912  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.187949  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.187959  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.187964  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.191046  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.688329  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.688359  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.688370  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.688374  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.692160  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.692713  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:14.188174  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.188198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.188207  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.188210  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.197312  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:33:14.688323  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.688353  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.688364  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.688369  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.692255  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.188273  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.188299  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.188309  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.188312  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.191633  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.688194  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.688221  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.688229  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.688233  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.691201  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:16.188087  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.188118  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.188130  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.188136  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.191654  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:16.192613  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:16.688084  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.688116  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.688127  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.688131  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.691196  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.188046  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.188079  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.188091  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.188099  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.191563  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.687488  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.687515  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.687523  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.687527  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.692225  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:18.187466  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.187496  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.187508  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.187513  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.190916  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.688169  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.688198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.688209  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.688214  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.691684  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.692180  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:19.188410  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.188443  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.188455  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.188461  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.191778  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:19.687861  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.687898  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.687909  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.687918  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.692517  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.187370  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.187394  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.187404  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.187409  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.190680  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.688383  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.688409  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.688418  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.688422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.692411  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.692972  766330 node_ready.go:49] node "ha-053933-m02" has status "Ready":"True"
	I1007 12:33:20.692999  766330 node_ready.go:38] duration metric: took 14.005807631s for node "ha-053933-m02" to be "Ready" ...
	I1007 12:33:20.693012  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:33:20.693143  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:20.693154  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.693162  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.693165  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.697361  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.703660  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.703786  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:33:20.703796  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.703803  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.703807  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.707181  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.708043  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.708061  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.708069  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.708074  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.710812  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.711426  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.711448  766330 pod_ready.go:82] duration metric: took 7.751816ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711460  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:33:20.711534  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.711542  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.711545  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.714909  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.715901  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.715918  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.715927  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.715934  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.719555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.720647  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.720668  766330 pod_ready.go:82] duration metric: took 9.201382ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720679  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720751  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:33:20.720759  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.720768  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.720773  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.723495  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.724196  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.724215  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.724226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.724229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.726952  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.727595  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.727616  766330 pod_ready.go:82] duration metric: took 6.930211ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727627  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727692  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:20.727700  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.727714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.727718  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.731049  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.731750  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.731766  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.731786  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.731793  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.734880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.228231  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.228260  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.228274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.228281  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.231667  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.232387  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.232407  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.232416  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.232422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.235588  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.728588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.728616  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.728628  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.728635  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.732106  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.732770  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.732786  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.732795  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.732798  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.735773  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.228683  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:22.228711  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.228720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.228724  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232193  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.232808  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.232825  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.232834  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232839  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.235792  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.236315  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.236338  766330 pod_ready.go:82] duration metric: took 1.508704734s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236354  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236419  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:33:22.236427  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.236434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.236438  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.239818  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.288880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:22.288905  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.288915  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.288920  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.292489  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.293074  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.293096  766330 pod_ready.go:82] duration metric: took 56.735786ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.293107  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.488539  766330 request.go:632] Waited for 195.305457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488616  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488627  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.488640  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.488646  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.492086  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.688457  766330 request.go:632] Waited for 195.312015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688532  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688537  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.688546  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.688550  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.691998  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.692647  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.692670  766330 pod_ready.go:82] duration metric: took 399.55659ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.692683  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.888729  766330 request.go:632] Waited for 195.939419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888840  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888849  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.888862  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.888872  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.892505  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.088565  766330 request.go:632] Waited for 195.365241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088643  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088651  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.088662  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.088670  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.091637  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.092259  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.092277  766330 pod_ready.go:82] duration metric: took 399.588182ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.092289  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.289099  766330 request.go:632] Waited for 196.721146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289204  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289216  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.289227  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.289236  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.292352  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.488835  766330 request.go:632] Waited for 195.58765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488907  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488912  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.488920  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.488925  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.491857  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.492343  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.492364  766330 pod_ready.go:82] duration metric: took 400.067435ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.492375  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.688407  766330 request.go:632] Waited for 195.943093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688521  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688529  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.688538  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.688543  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.692233  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.888501  766330 request.go:632] Waited for 195.323816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888614  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888622  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.888633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.888639  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.892680  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:23.893104  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.893123  766330 pod_ready.go:82] duration metric: took 400.740542ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.893133  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.089301  766330 request.go:632] Waited for 196.068782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089368  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089374  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.089388  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.089395  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.092648  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.288647  766330 request.go:632] Waited for 195.319776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288759  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288778  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.288794  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.288805  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.292348  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.292959  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.292988  766330 pod_ready.go:82] duration metric: took 399.844819ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.293007  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.489072  766330 request.go:632] Waited for 195.96428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489149  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489157  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.489167  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.489175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.492662  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.688896  766330 request.go:632] Waited for 195.439422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689009  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689017  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.689029  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.689035  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.692350  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.692962  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.692988  766330 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.693003  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.889214  766330 request.go:632] Waited for 196.093786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889300  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889309  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.889322  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.889329  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.892619  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.088740  766330 request.go:632] Waited for 195.405391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088815  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088821  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.088831  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.088837  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.092543  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.093141  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:25.093166  766330 pod_ready.go:82] duration metric: took 400.155132ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:25.093183  766330 pod_ready.go:39] duration metric: took 4.400126454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
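The block above is minikube's internal readiness poll for the system-critical pods. A minimal manual sketch of the same wait, for reproducing it outside the test harness, is shown below; the kubectl context name "ha-053933" and the 6m timeout are assumptions taken from the profile name and the wait budget in the log, not commands the test itself runs.

	kubectl --context ha-053933 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	kubectl --context ha-053933 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m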
	I1007 12:33:25.093213  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:33:25.093283  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:33:25.111694  766330 api_server.go:72] duration metric: took 18.825401123s to wait for apiserver process to appear ...
	I1007 12:33:25.111735  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:33:25.111762  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:33:25.118517  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:33:25.118624  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:33:25.118639  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.118651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.118656  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.119598  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:33:25.119715  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:33:25.119734  766330 api_server.go:131] duration metric: took 7.991573ms to wait for apiserver health ...
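The healthz and version probes recorded above can be reproduced by hand against the same endpoint. A minimal sketch follows; using curl with -k to skip TLS verification is an assumption for a quick manual check, whereas the test client authenticates with the cluster certificates.

	curl -k https://192.168.39.152:8443/healthz
	curl -k https://192.168.39.152:8443/version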
	I1007 12:33:25.119743  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:33:25.289166  766330 request.go:632] Waited for 169.340781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289250  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289255  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.289263  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.289268  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.295241  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.299874  766330 system_pods.go:59] 17 kube-system pods found
	I1007 12:33:25.299914  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.299919  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.299923  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.299926  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.299929  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.299933  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.299938  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.299941  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.299944  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.299947  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.299950  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.299953  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.299956  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.299959  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.299962  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.300005  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.300042  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.300050  766330 system_pods.go:74] duration metric: took 180.300279ms to wait for pod list to return data ...
	I1007 12:33:25.300061  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:33:25.489349  766330 request.go:632] Waited for 189.154197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489422  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489429  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.489441  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.489451  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.493783  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.494042  766330 default_sa.go:45] found service account: "default"
	I1007 12:33:25.494060  766330 default_sa.go:55] duration metric: took 193.9912ms for default service account to be created ...
	I1007 12:33:25.494070  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:33:25.688474  766330 request.go:632] Waited for 194.303496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688554  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688560  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.688568  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.688572  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.694194  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.700121  766330 system_pods.go:86] 17 kube-system pods found
	I1007 12:33:25.700159  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.700167  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.700179  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.700185  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.700191  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.700196  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.700202  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.700207  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.700213  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.700218  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.700223  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.700228  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.700233  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.700242  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.700248  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.700255  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.700258  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.700266  766330 system_pods.go:126] duration metric: took 206.189927ms to wait for k8s-apps to be running ...
	I1007 12:33:25.700277  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:33:25.700338  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:25.716873  766330 system_svc.go:56] duration metric: took 16.577644ms WaitForService to wait for kubelet
	I1007 12:33:25.716918  766330 kubeadm.go:582] duration metric: took 19.430632885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:33:25.716946  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:33:25.889445  766330 request.go:632] Waited for 172.381554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889527  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889535  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.889543  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.889547  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.893637  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.894406  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894446  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894466  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894476  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894483  766330 node_conditions.go:105] duration metric: took 177.530833ms to run NodePressure ...
	I1007 12:33:25.894499  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:33:25.894527  766330 start.go:255] writing updated cluster config ...
	I1007 12:33:25.896984  766330 out.go:201] 
	I1007 12:33:25.898622  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:25.898739  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.900470  766330 out.go:177] * Starting "ha-053933-m03" control-plane node in "ha-053933" cluster
	I1007 12:33:25.901744  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:33:25.901777  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:33:25.901887  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:33:25.901898  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:33:25.901996  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.902210  766330 start.go:360] acquireMachinesLock for ha-053933-m03: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:33:25.902261  766330 start.go:364] duration metric: took 29.008µs to acquireMachinesLock for "ha-053933-m03"
	I1007 12:33:25.902279  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:25.902373  766330 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:33:25.903871  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:33:25.903977  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:25.904021  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:25.919504  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I1007 12:33:25.920002  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:25.920499  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:25.920525  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:25.920897  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:25.921112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:25.921261  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:25.921411  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:33:25.921445  766330 client.go:168] LocalClient.Create starting
	I1007 12:33:25.921486  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:33:25.921530  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921554  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921635  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:33:25.921664  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921680  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921706  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:33:25.921718  766330 main.go:141] libmachine: (ha-053933-m03) Calling .PreCreateCheck
	I1007 12:33:25.921884  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:25.922300  766330 main.go:141] libmachine: Creating machine...
	I1007 12:33:25.922316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .Create
	I1007 12:33:25.922510  766330 main.go:141] libmachine: (ha-053933-m03) Creating KVM machine...
	I1007 12:33:25.923845  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing default KVM network
	I1007 12:33:25.924001  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing private KVM network mk-ha-053933
	I1007 12:33:25.924170  766330 main.go:141] libmachine: (ha-053933-m03) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:25.924210  766330 main.go:141] libmachine: (ha-053933-m03) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:33:25.924298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:25.924182  767113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:25.924373  766330 main.go:141] libmachine: (ha-053933-m03) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:33:26.206977  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.206809  767113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa...
	I1007 12:33:26.524415  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524231  767113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk...
	I1007 12:33:26.524455  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing magic tar header
	I1007 12:33:26.524470  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing SSH key tar header
	I1007 12:33:26.524482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524376  767113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:26.524496  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03
	I1007 12:33:26.524534  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 (perms=drwx------)
	I1007 12:33:26.524574  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:33:26.524585  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:33:26.524600  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:33:26.524609  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:33:26.524638  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:26.524653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:33:26.524661  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:33:26.524670  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:33:26.524678  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home
	I1007 12:33:26.524693  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Skipping /home - not owner
	I1007 12:33:26.524703  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:33:26.524718  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:33:26.524726  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.525722  766330 main.go:141] libmachine: (ha-053933-m03) define libvirt domain using xml: 
	I1007 12:33:26.525747  766330 main.go:141] libmachine: (ha-053933-m03) <domain type='kvm'>
	I1007 12:33:26.525776  766330 main.go:141] libmachine: (ha-053933-m03)   <name>ha-053933-m03</name>
	I1007 12:33:26.525795  766330 main.go:141] libmachine: (ha-053933-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:33:26.525808  766330 main.go:141] libmachine: (ha-053933-m03)   <vcpu>2</vcpu>
	I1007 12:33:26.525818  766330 main.go:141] libmachine: (ha-053933-m03)   <features>
	I1007 12:33:26.525830  766330 main.go:141] libmachine: (ha-053933-m03)     <acpi/>
	I1007 12:33:26.525838  766330 main.go:141] libmachine: (ha-053933-m03)     <apic/>
	I1007 12:33:26.525850  766330 main.go:141] libmachine: (ha-053933-m03)     <pae/>
	I1007 12:33:26.525859  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.525905  766330 main.go:141] libmachine: (ha-053933-m03)   </features>
	I1007 12:33:26.525934  766330 main.go:141] libmachine: (ha-053933-m03)   <cpu mode='host-passthrough'>
	I1007 12:33:26.525945  766330 main.go:141] libmachine: (ha-053933-m03)   
	I1007 12:33:26.525955  766330 main.go:141] libmachine: (ha-053933-m03)   </cpu>
	I1007 12:33:26.525965  766330 main.go:141] libmachine: (ha-053933-m03)   <os>
	I1007 12:33:26.525971  766330 main.go:141] libmachine: (ha-053933-m03)     <type>hvm</type>
	I1007 12:33:26.525976  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='cdrom'/>
	I1007 12:33:26.525983  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='hd'/>
	I1007 12:33:26.525988  766330 main.go:141] libmachine: (ha-053933-m03)     <bootmenu enable='no'/>
	I1007 12:33:26.525995  766330 main.go:141] libmachine: (ha-053933-m03)   </os>
	I1007 12:33:26.526002  766330 main.go:141] libmachine: (ha-053933-m03)   <devices>
	I1007 12:33:26.526013  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='cdrom'>
	I1007 12:33:26.526054  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/boot2docker.iso'/>
	I1007 12:33:26.526067  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:33:26.526077  766330 main.go:141] libmachine: (ha-053933-m03)       <readonly/>
	I1007 12:33:26.526087  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526099  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='disk'>
	I1007 12:33:26.526109  766330 main.go:141] libmachine: (ha-053933-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:33:26.526124  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk'/>
	I1007 12:33:26.526142  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:33:26.526153  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526162  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526172  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='mk-ha-053933'/>
	I1007 12:33:26.526180  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526189  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526201  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526212  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='default'/>
	I1007 12:33:26.526219  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526233  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526252  766330 main.go:141] libmachine: (ha-053933-m03)     <serial type='pty'>
	I1007 12:33:26.526271  766330 main.go:141] libmachine: (ha-053933-m03)       <target port='0'/>
	I1007 12:33:26.526293  766330 main.go:141] libmachine: (ha-053933-m03)     </serial>
	I1007 12:33:26.526317  766330 main.go:141] libmachine: (ha-053933-m03)     <console type='pty'>
	I1007 12:33:26.526331  766330 main.go:141] libmachine: (ha-053933-m03)       <target type='serial' port='0'/>
	I1007 12:33:26.526341  766330 main.go:141] libmachine: (ha-053933-m03)     </console>
	I1007 12:33:26.526352  766330 main.go:141] libmachine: (ha-053933-m03)     <rng model='virtio'>
	I1007 12:33:26.526364  766330 main.go:141] libmachine: (ha-053933-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:33:26.526375  766330 main.go:141] libmachine: (ha-053933-m03)     </rng>
	I1007 12:33:26.526382  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526387  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526400  766330 main.go:141] libmachine: (ha-053933-m03)   </devices>
	I1007 12:33:26.526412  766330 main.go:141] libmachine: (ha-053933-m03) </domain>
	I1007 12:33:26.526422  766330 main.go:141] libmachine: (ha-053933-m03) 
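The XML above is the libvirt domain definition that libmachine submits to qemu:///system. A rough manual sketch of the define/start/wait-for-IP sequence the following log lines record is given below; the file name ha-053933-m03.xml is hypothetical, since libmachine defines the domain from the in-memory XML rather than from a file on disk.

	virsh -c qemu:///system define ha-053933-m03.xml
	virsh -c qemu:///system start ha-053933-m03
	virsh -c qemu:///system domifaddr ha-053933-m03 --source lease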
	I1007 12:33:26.533781  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:c6:4c:5a in network default
	I1007 12:33:26.534377  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring networks are active...
	I1007 12:33:26.534401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.535036  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network default is active
	I1007 12:33:26.535318  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network mk-ha-053933 is active
	I1007 12:33:26.535654  766330 main.go:141] libmachine: (ha-053933-m03) Getting domain xml...
	I1007 12:33:26.536349  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.886582  766330 main.go:141] libmachine: (ha-053933-m03) Waiting to get IP...
	I1007 12:33:26.887435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.887805  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:26.887834  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.887787  767113 retry.go:31] will retry after 278.405187ms: waiting for machine to come up
	I1007 12:33:27.168357  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.168978  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.169005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.168920  767113 retry.go:31] will retry after 329.830323ms: waiting for machine to come up
	I1007 12:33:27.500231  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.500684  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.500728  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.500604  767113 retry.go:31] will retry after 372.653315ms: waiting for machine to come up
	I1007 12:33:27.875190  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.875624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.875654  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.875577  767113 retry.go:31] will retry after 444.943717ms: waiting for machine to come up
	I1007 12:33:28.322485  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.322945  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.322970  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.322909  767113 retry.go:31] will retry after 669.257582ms: waiting for machine to come up
	I1007 12:33:28.994144  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.994697  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.994715  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.994632  767113 retry.go:31] will retry after 733.137025ms: waiting for machine to come up
	I1007 12:33:29.729782  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:29.730264  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:29.730293  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:29.730214  767113 retry.go:31] will retry after 899.738353ms: waiting for machine to come up
	I1007 12:33:30.632328  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:30.632890  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:30.632916  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:30.632809  767113 retry.go:31] will retry after 931.890845ms: waiting for machine to come up
	I1007 12:33:31.566008  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:31.566423  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:31.566453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:31.566382  767113 retry.go:31] will retry after 1.324143868s: waiting for machine to come up
	I1007 12:33:32.892206  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:32.892600  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:32.892624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:32.892560  767113 retry.go:31] will retry after 1.884957277s: waiting for machine to come up
	I1007 12:33:34.779972  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:34.780414  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:34.780482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:34.780403  767113 retry.go:31] will retry after 2.797940617s: waiting for machine to come up
	I1007 12:33:37.580503  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:37.580938  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:37.581017  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:37.580916  767113 retry.go:31] will retry after 3.450180083s: waiting for machine to come up
	I1007 12:33:41.032804  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:41.033196  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:41.033227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:41.033144  767113 retry.go:31] will retry after 3.620491508s: waiting for machine to come up
	I1007 12:33:44.657262  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:44.657724  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:44.657749  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:44.657677  767113 retry.go:31] will retry after 4.652577623s: waiting for machine to come up
	I1007 12:33:49.314220  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314598  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314619  766330 main.go:141] libmachine: (ha-053933-m03) Found IP for machine: 192.168.39.53
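(The "will retry after …" lines above are libmachine polling the libvirt DHCP leases with a growing delay until the new domain reports an address. Below is a minimal sketch of such a wait-for-IP loop; lookupIP, the backoff values and the jitter are illustrative stand-ins, not minikube's actual retry.go implementation.)

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP
    // leases of a network for the MAC address of the new domain.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP with a growing, jittered delay, similar in
    // spirit to the "will retry after ..." lines in the log above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	start := time.Now()
    	backoff := 250 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Add some jitter and grow the delay, capping it at a few seconds.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if backoff < 4*time.Second {
    			backoff += backoff / 2
    		}
    	}
    	return "", fmt.Errorf("machine with MAC %s did not get an IP within %v", mac, deadline)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:92:71:bc", 3*time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }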
	I1007 12:33:49.314644  766330 main.go:141] libmachine: (ha-053933-m03) Reserving static IP address...
	I1007 12:33:49.315014  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "ha-053933-m03", mac: "52:54:00:92:71:bc", ip: "192.168.39.53"} in network mk-ha-053933
	I1007 12:33:49.395618  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:49.395664  766330 main.go:141] libmachine: (ha-053933-m03) Reserved static IP address: 192.168.39.53
	I1007 12:33:49.395679  766330 main.go:141] libmachine: (ha-053933-m03) Waiting for SSH to be available...
	I1007 12:33:49.398571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.398960  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933
	I1007 12:33:49.398990  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:92:71:bc
	I1007 12:33:49.399160  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:49.399184  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:49.399214  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:49.399227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:49.399241  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:49.403005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:33:49.403027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:33:49.403035  766330 main.go:141] libmachine: (ha-053933-m03) DBG | command : exit 0
	I1007 12:33:49.403039  766330 main.go:141] libmachine: (ha-053933-m03) DBG | err     : exit status 255
	I1007 12:33:49.403074  766330 main.go:141] libmachine: (ha-053933-m03) DBG | output  : 
	I1007 12:33:52.403247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:52.406252  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.406668  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.406699  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.407002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:52.407027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:52.407053  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:52.407069  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:52.407109  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:52.534915  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: <nil>: 
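(WaitForSSH above shells out to the external ssh client and simply runs "exit 0" until it succeeds; the first probe fails with exit status 255 because sshd in the guest is not up yet, and a later probe returns <nil>. A rough sketch of that probe using os/exec — host, user and key path are illustrative values, and this is not minikube's driver code.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs "exit 0" over ssh with roughly the options seen in the
    // log (no host key checking, key-only auth) and reports success.
    func sshReady(host, user, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0")
    	return cmd.Run() == nil // a non-nil error covers exit status 255 and connect failures
    }

    func main() {
    	const host, user, key = "192.168.39.53", "docker", "id_rsa"
    	for attempt := 1; attempt <= 5; attempt++ {
    		if sshReady(host, user, key) {
    			fmt.Println("SSH is available")
    			return
    		}
    		fmt.Printf("attempt %d failed, retrying in 3s\n", attempt)
    		time.Sleep(3 * time.Second) // the log shows a ~3s gap between probes
    	}
    	fmt.Println("gave up waiting for SSH")
    }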
	I1007 12:33:52.535288  766330 main.go:141] libmachine: (ha-053933-m03) KVM machine creation complete!
	I1007 12:33:52.535635  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:52.536389  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536639  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536874  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:33:52.536891  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetState
	I1007 12:33:52.538444  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:33:52.538462  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:33:52.538469  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:33:52.538476  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.541542  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.541939  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.541963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.542112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.542296  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542481  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542677  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.542861  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.543138  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.543151  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:33:52.649741  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:52.649782  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:33:52.649794  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.652589  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.652969  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.653002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.653140  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.653374  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653551  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653673  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.653873  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.654072  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.654084  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:33:52.759715  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:33:52.759834  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:33:52.759854  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:33:52.759868  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760230  766330 buildroot.go:166] provisioning hostname "ha-053933-m03"
	I1007 12:33:52.760268  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760500  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.763370  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.763827  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.763857  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.764033  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.764271  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764477  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764633  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.764776  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.764967  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.764978  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m03 && echo "ha-053933-m03" | sudo tee /etc/hostname
	I1007 12:33:52.887558  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m03
	
	I1007 12:33:52.887587  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.890785  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.891281  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891393  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.891600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.891855  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.892166  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.892433  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.892634  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.892651  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:33:53.009149  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
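(Hostname provisioning runs two commands over SSH: one sets the transient hostname and writes /etc/hostname, the shell fragment above then rewrites or appends the 127.0.1.1 entry in /etc/hosts. A hedged sketch of how those snippets can be assembled in Go; this helper is an illustrative stand-in, not minikube's buildroot provisioner.)

    package main

    import "fmt"

    // hostnameCommands returns the shell commands used to provision a
    // hostname; the strings mirror the log output above.
    func hostnameCommands(name string) []string {
    	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    	// Update the 127.0.1.1 line in /etc/hosts, or append one if missing.
    	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    	return []string{set, hosts}
    }

    func main() {
    	for _, cmd := range hostnameCommands("ha-053933-m03") {
    		fmt.Println("would run over SSH:")
    		fmt.Println(cmd)
    	}
    }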
	I1007 12:33:53.009337  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:33:53.009478  766330 buildroot.go:174] setting up certificates
	I1007 12:33:53.009552  766330 provision.go:84] configureAuth start
	I1007 12:33:53.009602  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:53.009986  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.012616  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.012988  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.013047  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.013159  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.015298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015632  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.015653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015824  766330 provision.go:143] copyHostCerts
	I1007 12:33:53.015867  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.015916  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:33:53.015927  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.016009  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:33:53.016125  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016152  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:33:53.016162  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016198  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:33:53.016272  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016302  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:33:53.016310  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016353  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:33:53.016436  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m03 san=[127.0.0.1 192.168.39.53 ha-053933-m03 localhost minikube]
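(The server cert above is generated with the listed SANs — loopback, the node IPs, the hostname — and signed by the shared minikube CA. For illustration only, a self-signed sketch with the same SAN shape using crypto/x509; the CA signing step and key sizes are assumptions, not what provision.go actually does.)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// SANs mirroring the log: loopback, the node IP, hostname aliases.
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")}
    	dns := []string{"ha-053933-m03", "localhost", "minikube"}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	// Self-signed for brevity; the real server.pem is signed by the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }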
	I1007 12:33:53.275511  766330 provision.go:177] copyRemoteCerts
	I1007 12:33:53.275578  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:33:53.275609  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.278571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.278958  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.278997  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.279237  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.279470  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.279694  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.279856  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.365609  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:33:53.365705  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:33:53.394108  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:33:53.394203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:33:53.421846  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:33:53.421930  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:33:53.448310  766330 provision.go:87] duration metric: took 438.733854ms to configureAuth
	I1007 12:33:53.448346  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:33:53.448616  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:53.448711  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.451435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.451928  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.451963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.452102  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.452316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452472  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452605  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.452784  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.452957  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.452971  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:33:53.686714  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:33:53.686753  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:33:53.686762  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetURL
	I1007 12:33:53.688034  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using libvirt version 6000000
	I1007 12:33:53.690553  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691049  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.691081  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691275  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:33:53.691309  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:33:53.691317  766330 client.go:171] duration metric: took 27.769860907s to LocalClient.Create
	I1007 12:33:53.691347  766330 start.go:167] duration metric: took 27.76993753s to libmachine.API.Create "ha-053933"
	I1007 12:33:53.691356  766330 start.go:293] postStartSetup for "ha-053933-m03" (driver="kvm2")
	I1007 12:33:53.691366  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:33:53.691384  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.691657  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:33:53.691683  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.693729  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694161  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.694191  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694359  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.694535  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.694715  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.694900  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.777573  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:33:53.782595  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:33:53.782625  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:33:53.782710  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:33:53.782804  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:33:53.782816  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:33:53.782918  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:33:53.793716  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:53.819127  766330 start.go:296] duration metric: took 127.75028ms for postStartSetup
	I1007 12:33:53.819228  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:53.819965  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.822875  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823288  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.823318  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823585  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:53.823804  766330 start.go:128] duration metric: took 27.921419624s to createHost
	I1007 12:33:53.823830  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.826389  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826755  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.826788  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826991  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.827187  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827532  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.827708  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.827909  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.827922  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:33:53.935241  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304433.915881343
	
	I1007 12:33:53.935272  766330 fix.go:216] guest clock: 1728304433.915881343
	I1007 12:33:53.935282  766330 fix.go:229] Guest: 2024-10-07 12:33:53.915881343 +0000 UTC Remote: 2024-10-07 12:33:53.823818192 +0000 UTC m=+155.718348733 (delta=92.063151ms)
	I1007 12:33:53.935303  766330 fix.go:200] guest clock delta is within tolerance: 92.063151ms
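(The clock check above parses the guest's `date +%s.%N` output and compares it with the host-side timestamp; if the delta fell outside tolerance the guest clock would be resynchronised. A small sketch of that comparison — the one-second tolerance below is an assumption for illustration, not fix.go's actual threshold.)

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // guestClockDelta parses the output of `date +%s.%N` as run on the
    // guest and returns how far it is from the supplied host time.
    func guestClockDelta(output string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(output, 64)
    	if err != nil {
    		return 0, fmt.Errorf("parsing guest clock %q: %w", output, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Date(2024, 10, 7, 12, 33, 53, 823818192, time.UTC)
    	delta, err := guestClockDelta("1728304433.915881343", host)
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = time.Second // illustrative tolerance only
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
    	}
    }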
	I1007 12:33:53.935309  766330 start.go:83] releasing machines lock for "ha-053933-m03", held for 28.033038751s
	I1007 12:33:53.935340  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.935600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.938944  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.939372  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.939401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.942103  766330 out.go:177] * Found network options:
	I1007 12:33:53.943700  766330 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.227
	W1007 12:33:53.945305  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.945333  766330 proxy.go:119] fail to check proxy env: Error ip not in block
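(The two "fail to check proxy env: Error ip not in block" warnings mean the existing NO_PROXY entries are not CIDR blocks covering the new node IP, so minikube appends it. A sketch of that containment check, assuming NO_PROXY entries are literal IPs or CIDR blocks; hostname suffixes are ignored here.)

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // coveredByNoProxy reports whether ip is already matched by one of the
    // comma-separated NO_PROXY entries (literal IPs or CIDR blocks only).
    func coveredByNoProxy(ip string, noProxy string) bool {
    	addr := net.ParseIP(ip)
    	if addr == nil {
    		return false
    	}
    	for _, entry := range strings.Split(noProxy, ",") {
    		entry = strings.TrimSpace(entry)
    		if entry == "" {
    			continue
    		}
    		if _, block, err := net.ParseCIDR(entry); err == nil {
    			if block.Contains(addr) {
    				return true
    			}
    			continue
    		}
    		if other := net.ParseIP(entry); other != nil && other.Equal(addr) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	noProxy := "192.168.39.152,192.168.39.227"
    	fmt.Println(coveredByNoProxy("192.168.39.53", noProxy))  // false: would be appended
    	fmt.Println(coveredByNoProxy("192.168.39.227", noProxy)) // true: already listed
    }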
	I1007 12:33:53.945354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946191  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946469  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946569  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:33:53.946621  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	W1007 12:33:53.946704  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.946780  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.946900  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:33:53.946926  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.950981  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951020  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951403  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951437  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951491  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951686  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951876  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951902  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952038  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952066  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952209  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952204  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.952359  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:54.197386  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:33:54.205923  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:33:54.206059  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:33:54.226436  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:33:54.226467  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:33:54.226539  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:33:54.247190  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:33:54.263380  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:33:54.263461  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:33:54.280192  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:33:54.297621  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:33:54.421983  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:33:54.595012  766330 docker.go:233] disabling docker service ...
	I1007 12:33:54.595103  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:33:54.611124  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:33:54.625647  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:33:54.766528  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:33:54.902157  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:33:54.917030  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:33:54.939198  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:33:54.939275  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.951699  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:33:54.951792  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.963943  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.975263  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.986454  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:33:54.998449  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.010053  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.029064  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
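(The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A condensed sketch of how those commands can be generated before being fed to a command runner; this helper only reproduces the strings from the log and is not minikube's crio.go.)

    package main

    import "fmt"

    const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

    // crioConfigCommands builds the sed invocations that point CRI-O at a
    // pause image and the cgroupfs driver, mirroring the log above.
    func crioConfigCommands(pauseImage string) []string {
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, crioConf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, crioConf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, crioConf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, crioConf),
    	}
    }

    func main() {
    	// A real runner would execute each command over SSH; here we just print them.
    	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10") {
    		fmt.Println(cmd)
    	}
    }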
	I1007 12:33:55.040536  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:33:55.051384  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:33:55.051443  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:33:55.065668  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:33:55.076166  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:55.212352  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:33:55.312005  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:33:55.312090  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:33:55.318387  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:33:55.318471  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:33:55.322868  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:33:55.367251  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:33:55.367355  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.397971  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.435128  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:33:55.436490  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:33:55.437841  766330 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.227
	I1007 12:33:55.439394  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:55.442218  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442572  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:55.442593  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442854  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:33:55.447427  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:55.460437  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:33:55.460787  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:55.461177  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.461238  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.477083  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1007 12:33:55.477627  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.478242  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.478264  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.478601  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.478770  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:33:55.480358  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:55.480665  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.480703  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.497617  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I1007 12:33:55.498208  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.498771  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.498802  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.499144  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.499349  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:55.499537  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.53
	I1007 12:33:55.499550  766330 certs.go:194] generating shared ca certs ...
	I1007 12:33:55.499567  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.499698  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:33:55.499751  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:33:55.499772  766330 certs.go:256] generating profile certs ...
	I1007 12:33:55.499874  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:33:55.499909  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23
	I1007 12:33:55.499931  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:33:55.566679  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 ...
	I1007 12:33:55.566718  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23: {Name:mk9518d7a648a9de4b8c05fe89f1c3f09f2c6a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.566929  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 ...
	I1007 12:33:55.566948  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23: {Name:mkdcb7e0de901ae74037605940d4a487b0fb8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.567053  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:33:55.567210  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:33:55.567369  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:33:55.567391  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:33:55.567411  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:33:55.567431  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:33:55.567450  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:33:55.567469  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:33:55.567488  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:33:55.567506  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:33:55.586158  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:33:55.586279  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:33:55.586335  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:33:55.586352  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:33:55.586387  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:33:55.586425  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:33:55.586458  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:33:55.586517  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:55.586558  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:33:55.586579  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:55.586598  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:33:55.586646  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:55.589684  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590162  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:55.590193  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590365  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:55.590577  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:55.590763  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:55.590948  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:55.666401  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:33:55.672290  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:33:55.685836  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:33:55.691589  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:33:55.704365  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:33:55.709554  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:33:55.723585  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:33:55.728967  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:33:55.742781  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:33:55.747517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:33:55.759055  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:33:55.763953  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:33:55.775294  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:33:55.802739  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:33:55.829606  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:33:55.854203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:33:55.881501  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:33:55.907802  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:33:55.935368  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:33:55.966709  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:33:55.993237  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:33:56.018616  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:33:56.044579  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:33:56.069120  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:33:56.087293  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:33:56.105801  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:33:56.126196  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:33:56.145822  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:33:56.163980  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:33:56.182187  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:33:56.201073  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:33:56.207142  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:33:56.218685  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.223978  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.224097  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.231835  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:33:56.243660  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:33:56.255269  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260456  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260520  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.267451  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:33:56.279865  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:33:56.291556  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296671  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296755  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.303021  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
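	The certificate block above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how the system trust store locates it. A minimal local sketch of that pattern, not minikube's actual ssh_runner-based implementation; the certificate path is taken from the log and is otherwise illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert mirrors the steps logged above: compute the OpenSSL subject
	// hash of a PEM certificate ("openssl x509 -hash -noout -in <file>") and
	// symlink the file as <hash>.0 under /etc/ssl/certs.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // "ln -fs" semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}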
	I1007 12:33:56.314190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:33:56.319184  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:33:56.319253  766330 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1007 12:33:56.319359  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:33:56.319393  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:33:56.319441  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:33:56.337458  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:33:56.337539  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
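	The kube-vip manifest above is rendered by minikube from a Go template (see kube-vip.go:137) and written to /etc/kubernetes/manifests as a static pod, with the HA VIP 192.168.39.254 injected as an environment variable. A hedged sketch of rendering a manifest of this shape with text/template; the template body and field names are illustrative, not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down stand-in for the kube-vip static-pod template; only the
	// fields shown in the log above are parameterised here.
	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    env:
	    - name: address
	      value: {{.VIP}}
	    - name: port
	      value: "{{.Port}}"
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		data := struct {
			Image string
			VIP   string
			Port  int
		}{"ghcr.io/kube-vip/kube-vip:v0.8.3", "192.168.39.254", 8443}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}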
	I1007 12:33:56.337609  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.352182  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:33:56.352262  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:33:56.364932  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:33:56.365107  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.365108  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:56.364948  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:33:56.365318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.365380  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.386729  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:33:56.386794  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:33:56.386811  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:33:56.386844  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:33:56.386813  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.387110  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.420143  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:33:56.420202  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
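	Each binary transfer above follows the same pattern: stat the destination first, and copy from the local cache only when the file is missing (the "existence check ... Process exited with status 1" lines). A local-filesystem sketch of that check-then-copy step, assuming the paths from the log; minikube itself performs this over SSH via ssh_runner and scp:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// ensureFile copies src to dst only if dst does not already exist,
	// mirroring the stat-then-scp sequence in the log above.
	func ensureFile(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, skip the transfer
		} else if !os.IsNotExist(err) {
			return err
		}
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return err
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		return os.WriteFile(dst, data, 0o755)
	}

	func main() {
		src := "/home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet"
		dst := "/var/lib/minikube/binaries/v1.31.1/kubelet"
		if err := ensureFile(src, dst); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}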
	I1007 12:33:57.371744  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:33:57.382647  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:33:57.402832  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:33:57.421823  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:33:57.441482  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:33:57.445627  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:57.459762  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:57.603405  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:57.624431  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:57.624969  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:57.625051  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:57.641787  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I1007 12:33:57.642353  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:57.642903  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:57.642925  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:57.643307  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:57.643533  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:57.643693  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:33:57.643829  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:33:57.643846  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:57.646962  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647481  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:57.647512  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647651  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:57.647823  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:57.647983  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:57.648106  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:57.973692  766330 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:57.973754  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1007 12:34:20.692568  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.718770843s)
	I1007 12:34:20.692609  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:34:21.235276  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m03 minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:34:21.384823  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:34:21.546452  766330 start.go:319] duration metric: took 23.902751753s to joinCluster
	I1007 12:34:21.546537  766330 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:34:21.547030  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:34:21.548080  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:34:21.549612  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:34:21.823190  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:34:21.845870  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:34:21.846263  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:34:21.846360  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:34:21.846701  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:21.846820  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:21.846832  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:21.846844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:21.846854  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:21.850883  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:22.347874  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.347909  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.347923  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.347929  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.351566  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:22.847344  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.847377  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.866723  766330 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 12:34:23.347347  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.347375  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.347387  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.347394  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.351929  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:23.847333  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.847355  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.847363  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.847372  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.850896  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:23.851597  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:24.347594  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.347622  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.347633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.347638  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.351365  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:24.847338  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.847389  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.850525  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.347474  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.347501  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.347512  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.347517  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.350876  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.847008  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.847039  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.847047  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.847052  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.850192  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.347863  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.347891  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.347899  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.347903  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.351555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.352073  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:26.847450  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.847477  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.847485  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.847489  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.851359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.347145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.347169  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.347179  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.347185  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.350867  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.847674  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.847701  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.847710  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.847715  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.851381  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.346976  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.347004  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.347016  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.347020  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.350677  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.847299  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.847324  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.847334  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.847342  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.852124  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:28.852851  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:29.347470  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.347495  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.347506  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.347511  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.351169  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:29.847063  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.847088  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.847096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.847101  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.850541  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:30.347314  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.347341  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.347349  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.347354  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.351677  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:30.847295  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.847322  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.847331  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.847337  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.851021  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.347887  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.347917  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.347928  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.347932  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.351855  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.352449  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:31.847880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.847906  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.847914  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.847918  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.851368  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.347251  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.347285  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.347297  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.347304  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.351028  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.847346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.847371  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.847380  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.847385  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.850808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.347425  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.347452  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.347461  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.347465  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.351213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.847937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.847961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.847976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.847981  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.852995  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:33.853973  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:34.347964  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.347989  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.348006  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.348012  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.351982  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:34.847651  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.847676  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.847685  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.847690  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.851757  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.347354  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.347377  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.347386  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.347390  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.351104  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.847711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.847737  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.847748  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.847753  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.858606  766330 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:34:35.859308  766330 node_ready.go:49] node "ha-053933-m03" has status "Ready":"True"
	I1007 12:34:35.859333  766330 node_ready.go:38] duration metric: took 14.012608332s for node "ha-053933-m03" to be "Ready" ...
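	The node_ready loop above amounts to repeatedly GETting /api/v1/nodes/ha-053933-m03 until its Ready condition reports True, with the stale VIP host replaced by the primary apiserver address (kubeadm.go:483). A hedged client-go sketch of the same loop, assuming the kubeconfig path from the log; the node name, address, timeout and poll interval are illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18424-747025/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Talk to the primary control plane directly instead of the HA VIP,
		// as the "Overriding stale ClientConfig host" log line does.
		cfg.Host = "https://192.168.39.152:8443"
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			n, err := client.CoreV1().Nodes().Get(context.Background(), "ha-053933-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}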
	I1007 12:34:35.859345  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:35.859442  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:35.859456  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.859468  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.859474  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.869218  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:34:35.877082  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.877211  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:34:35.877225  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.877235  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.877246  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.881909  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.883332  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.883357  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.883368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.883378  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.888505  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:35.889562  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.889584  766330 pod_ready.go:82] duration metric: took 12.462204ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889599  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889693  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:34:35.889703  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.889714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.889720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.894158  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.894859  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.894878  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.894888  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.894894  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.898314  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.898768  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.898786  766330 pod_ready.go:82] duration metric: took 9.180577ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898799  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898867  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:34:35.898875  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.898882  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.898885  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.903049  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.903727  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.903743  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.903754  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.903761  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.906490  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.907003  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.907073  766330 pod_ready.go:82] duration metric: took 8.251291ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907112  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907213  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:34:35.907222  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.907230  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.907250  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.910128  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.910735  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:35.910749  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.910760  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.910767  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.914012  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.914767  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.914789  766330 pod_ready.go:82] duration metric: took 7.665567ms for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.914802  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:36.048508  766330 request.go:632] Waited for 133.622997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048575  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048580  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.048588  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.048592  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.052571  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.248730  766330 request.go:632] Waited for 195.373798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248827  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248836  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.248844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.248849  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.251932  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
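	The recurring "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's own rate limiter, which defaults to roughly QPS 5 / burst 10 when left unset; the bursts of node and pod GETs during pod_ready checks exceed that, so requests queue locally before ever reaching the apiserver. A sketch of how a caller would raise those limits on rest.Config; the values are illustrative, not what minikube uses:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18424-747025/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at zero, client-go applies its defaults and logs
		// the client-side waits seen above; raising them removes the queueing.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}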
	I1007 12:34:36.448570  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.448595  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.448605  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.448610  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.452907  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:36.647847  766330 request.go:632] Waited for 194.331001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647936  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647943  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.647951  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.647956  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.651933  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.915705  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.915729  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.915738  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.915742  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.919213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.048315  766330 request.go:632] Waited for 128.338635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048400  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048408  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.048424  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.048429  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.051185  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:37.415988  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.416012  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.416021  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.416026  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.419983  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.448134  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.448158  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.448168  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.448175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.451453  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.915937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.915961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.915971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.915976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.920167  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:37.921049  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.921073  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.921086  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.921093  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.924604  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.925286  766330 pod_ready.go:93] pod "etcd-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:37.925306  766330 pod_ready.go:82] duration metric: took 2.010496086s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:37.925324  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.048769  766330 request.go:632] Waited for 123.357964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048854  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.048866  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.048882  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.052431  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.248516  766330 request.go:632] Waited for 195.362302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248623  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248634  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.248644  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.248651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.252242  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.252762  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.252784  766330 pod_ready.go:82] duration metric: took 327.452579ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.252797  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.447801  766330 request.go:632] Waited for 194.917273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447884  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447889  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.447897  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.447902  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.451491  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.648627  766330 request.go:632] Waited for 196.37134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648716  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.648722  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.648732  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.652823  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:38.653461  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.653480  766330 pod_ready.go:82] duration metric: took 400.67636ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.653490  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.848685  766330 request.go:632] Waited for 195.113793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848879  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.848893  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.848898  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.853139  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:39.048666  766330 request.go:632] Waited for 194.422198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048757  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048765  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.048773  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.048780  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.052403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.052899  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.052921  766330 pod_ready.go:82] duration metric: took 399.423284ms for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.052935  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.248381  766330 request.go:632] Waited for 195.347943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248463  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248470  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.248479  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.248532  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.252304  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.448654  766330 request.go:632] Waited for 195.421963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448774  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448781  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.448789  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.448794  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.452418  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.452966  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.452987  766330 pod_ready.go:82] duration metric: took 400.045067ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.452997  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.648075  766330 request.go:632] Waited for 195.002627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648177  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648188  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.648196  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.648203  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.651698  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.848035  766330 request.go:632] Waited for 195.367175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848150  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848170  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.848184  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.848192  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.851573  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.852402  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.852421  766330 pod_ready.go:82] duration metric: took 399.417648ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.852432  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.048539  766330 request.go:632] Waited for 196.032961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048627  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048633  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.048641  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.048647  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.052288  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.248694  766330 request.go:632] Waited for 195.442218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248809  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248819  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.248829  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.248839  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.252540  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.253313  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.253337  766330 pod_ready.go:82] duration metric: took 400.899295ms for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.253349  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.448782  766330 request.go:632] Waited for 195.339385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448860  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.448879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.448899  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.452366  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.648273  766330 request.go:632] Waited for 194.918691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648352  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.648361  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.648367  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.651885  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.652427  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.652452  766330 pod_ready.go:82] duration metric: took 399.095883ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.652465  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.848579  766330 request.go:632] Waited for 196.00042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848642  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848648  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.848657  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.848660  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.852403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.048483  766330 request.go:632] Waited for 195.416905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048561  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048566  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.048574  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.048582  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.052281  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.052757  766330 pod_ready.go:93] pod "kube-proxy-dqqj6" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.052775  766330 pod_ready.go:82] duration metric: took 400.298296ms for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.052785  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.247821  766330 request.go:632] Waited for 194.952122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247915  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247920  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.247942  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.247958  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.251753  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.447806  766330 request.go:632] Waited for 195.292745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447871  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447876  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.447883  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.447887  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.451374  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.452013  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.452035  766330 pod_ready.go:82] duration metric: took 399.242268ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.452048  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.648060  766330 request.go:632] Waited for 195.92136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648167  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.648176  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.648181  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.652281  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:41.848221  766330 request.go:632] Waited for 195.408754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848307  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848321  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.848329  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.848332  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.851502  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.852147  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.852173  766330 pod_ready.go:82] duration metric: took 400.115446ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.852186  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.048319  766330 request.go:632] Waited for 196.021861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048415  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048421  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.048429  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.048434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.051904  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.247954  766330 request.go:632] Waited for 195.30672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248042  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248048  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.248056  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.248060  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.251799  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.252357  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.252378  766330 pod_ready.go:82] duration metric: took 400.185892ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.252389  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.448570  766330 request.go:632] Waited for 196.083361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448644  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448649  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.448658  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.448665  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.452279  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.648464  766330 request.go:632] Waited for 195.372097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648558  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648567  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.648575  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.648587  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.651837  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.652442  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.652462  766330 pod_ready.go:82] duration metric: took 400.066938ms for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.652473  766330 pod_ready.go:39] duration metric: took 6.79311586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:42.652490  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:34:42.652549  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:34:42.669655  766330 api_server.go:72] duration metric: took 21.123075945s to wait for apiserver process to appear ...
	I1007 12:34:42.669686  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:34:42.669721  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:34:42.677436  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:34:42.677526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:34:42.677533  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.677545  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.677556  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.678540  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:34:42.678609  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:34:42.678628  766330 api_server.go:131] duration metric: took 8.935272ms to wait for apiserver health ...
	I1007 12:34:42.678643  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:34:42.848087  766330 request.go:632] Waited for 169.34722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848178  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848184  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.848192  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.848197  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.854471  766330 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:34:42.861098  766330 system_pods.go:59] 24 kube-system pods found
	I1007 12:34:42.861133  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:42.861137  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:42.861141  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:42.861145  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:42.861148  766330 system_pods.go:61] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:42.861151  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:42.861154  766330 system_pods.go:61] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:42.861157  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:42.861160  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:42.861163  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:42.861166  766330 system_pods.go:61] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:42.861170  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:42.861177  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:42.861180  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:42.861182  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:42.861185  766330 system_pods.go:61] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:42.861189  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:42.861191  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:42.861194  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:42.861197  766330 system_pods.go:61] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:42.861200  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:42.861203  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:42.861206  766330 system_pods.go:61] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:42.861212  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:42.861221  766330 system_pods.go:74] duration metric: took 182.569158ms to wait for pod list to return data ...
	I1007 12:34:42.861229  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:34:43.048753  766330 request.go:632] Waited for 187.419479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048837  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.048875  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.048879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.053383  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:43.053574  766330 default_sa.go:45] found service account: "default"
	I1007 12:34:43.053596  766330 default_sa.go:55] duration metric: took 192.357019ms for default service account to be created ...
	I1007 12:34:43.053609  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:34:43.248358  766330 request.go:632] Waited for 194.661822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248434  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248457  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.248468  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.248480  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.254368  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:43.261575  766330 system_pods.go:86] 24 kube-system pods found
	I1007 12:34:43.261611  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:43.261617  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:43.261621  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:43.261625  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:43.261628  766330 system_pods.go:89] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:43.261632  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:43.261636  766330 system_pods.go:89] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:43.261641  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:43.261646  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:43.261651  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:43.261656  766330 system_pods.go:89] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:43.261665  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:43.261670  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:43.261679  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:43.261684  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:43.261689  766330 system_pods.go:89] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:43.261704  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:43.261709  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:43.261713  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:43.261719  766330 system_pods.go:89] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:43.261722  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:43.261730  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:43.261736  766330 system_pods.go:89] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:43.261739  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:43.261746  766330 system_pods.go:126] duration metric: took 208.130933ms to wait for k8s-apps to be running ...
	I1007 12:34:43.261758  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:34:43.261819  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:34:43.278366  766330 system_svc.go:56] duration metric: took 16.59381ms WaitForService to wait for kubelet
	I1007 12:34:43.278406  766330 kubeadm.go:582] duration metric: took 21.731835186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:34:43.278428  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:34:43.447722  766330 request.go:632] Waited for 169.191028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447802  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447807  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.447815  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.447822  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.451521  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:43.453111  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453136  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453151  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453154  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453158  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453161  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453165  766330 node_conditions.go:105] duration metric: took 174.732727ms to run NodePressure ...
	I1007 12:34:43.453176  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:34:43.453200  766330 start.go:255] writing updated cluster config ...
	I1007 12:34:43.453638  766330 ssh_runner.go:195] Run: rm -f paused
	I1007 12:34:43.510074  766330 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:34:43.512318  766330 out.go:177] * Done! kubectl is now configured to use "ha-053933" cluster and "default" namespace by default
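The pod_ready.go waits logged above poll each control-plane pod until its Ready condition reports True (or the 6m timeout expires), throttled client-side between requests. Below is a minimal client-go sketch of that polling pattern, assuming a standard kubeconfig and using one pod name from this run purely as a placeholder; it is not minikube's actual implementation, which also records the duration metrics seen in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config), the same file kubectl uses.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, up to 6 minutes, until the pod's Ready condition is True.
	// Namespace and pod name are illustrative placeholders taken from this log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-053933", metav1.GetOptions{})
			if err != nil {
				// Keep polling on transient API errors instead of failing immediately.
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}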
	
	
	==> CRI-O <==
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.922993177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304709922969281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa795ae3-28af-4994-8053-154ae524280e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.923709992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc731b9a-ce61-4586-ace9-9a28c185c5ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.923806211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc731b9a-ce61-4586-ace9-9a28c185c5ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.924312423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc731b9a-ce61-4586-ace9-9a28c185c5ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.964887238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b90941e-c96b-47c3-aea1-924db6d22185 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.964964894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b90941e-c96b-47c3-aea1-924db6d22185 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.966801634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=810182c6-1caa-4e89-8ad3-d50e6ac0813f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.967245775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304709967219868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=810182c6-1caa-4e89-8ad3-d50e6ac0813f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.967966858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9055b550-e9ab-4632-8b40-3c86696e1bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.968052213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9055b550-e9ab-4632-8b40-3c86696e1bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:29 ha-053933 crio[664]: time="2024-10-07 12:38:29.968279101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9055b550-e9ab-4632-8b40-3c86696e1bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.014367607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d98ae60c-9586-47e7-a58f-c2c4e416dcf5 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.014475595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d98ae60c-9586-47e7-a58f-c2c4e416dcf5 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.015770387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac3adf10-8c1a-440f-8788-3a5e0bb3e8a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.016188684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304710016164689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac3adf10-8c1a-440f-8788-3a5e0bb3e8a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.016710657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f0759b-85f2-46f3-a3f9-da253e225e52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.016775144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f0759b-85f2-46f3-a3f9-da253e225e52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.017011549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f0759b-85f2-46f3-a3f9-da253e225e52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.057846649Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04b8d937-9ecc-4917-b470-a53e4cc49304 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.057934809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04b8d937-9ecc-4917-b470-a53e4cc49304 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.059084908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6000a342-fa73-41d7-93d3-377828f72183 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.059576979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304710059480682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6000a342-fa73-41d7-93d3-377828f72183 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.060099566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5b99128-af86-4734-9fd0-b2cb75a03230 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.060169667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5b99128-af86-4734-9fd0-b2cb75a03230 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:30 ha-053933 crio[664]: time="2024-10-07 12:38:30.060406090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5b99128-af86-4734-9fd0-b2cb75a03230 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ba824fcefba6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e189556a18c92       busybox-7dff88458-gx88f
	2867817e1f480       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   0d58c208fea1c       coredns-7c65d6cfc9-tqtzn
	35044c701c165       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   89c61a059649d       coredns-7c65d6cfc9-sj44v
	3da0371dd7287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8d79b5c178f5d       storage-provisioner
	65adc93f12fb7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1546c9281ca68       kindnet-4gmn6
	aea74cdff9eee       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   6bb33ce6417a6       kube-proxy-7bwxp
	e756202203ed3       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   0e8b4b3150e40       kube-vip-ha-053933
	f190ed8ea3a7d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   228ca0c55468f       kube-controller-manager-ha-053933
	096488f001092       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cd767df10cb41       kube-scheduler-ha-053933
	fe11729317aca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   90cea5dfb2e91       etcd-ha-053933
	a23f58b62cf7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   706ba9f92d690       kube-apiserver-ha-053933
	
	
	==> coredns [2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4] <==
	[INFO] 10.244.1.2:56331 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237909s
	[INFO] 10.244.1.2:36489 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015207s
	[INFO] 10.244.2.2:39298 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129286s
	[INFO] 10.244.2.2:47065 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177192s
	[INFO] 10.244.2.2:34384 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120996s
	[INFO] 10.244.2.2:55346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176087s
	[INFO] 10.244.0.4:46975 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114471s
	[INFO] 10.244.0.4:58945 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225792s
	[INFO] 10.244.0.4:43259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067959s
	[INFO] 10.244.0.4:34928 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001509847s
	[INFO] 10.244.0.4:46991 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079782s
	[INFO] 10.244.0.4:59761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084499s
	[INFO] 10.244.1.2:49251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140128s
	[INFO] 10.244.1.2:33825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172303s
	[INFO] 10.244.2.2:58538 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185922s
	[INFO] 10.244.0.4:44359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137041s
	[INFO] 10.244.0.4:58301 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099102s
	[INFO] 10.244.1.2:36803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222211s
	[INFO] 10.244.1.2:41006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207899s
	[INFO] 10.244.1.2:43041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129649s
	[INFO] 10.244.2.2:45405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175032s
	[INFO] 10.244.2.2:36952 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143195s
	[INFO] 10.244.0.4:39376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106075s
	[INFO] 10.244.0.4:60091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121535s
	[INFO] 10.244.0.4:37488 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084395s
	
	
	==> coredns [35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5] <==
	[INFO] 10.244.2.2:33316 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000351738s
	[INFO] 10.244.2.2:40861 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001441898s
	[INFO] 10.244.0.4:57140 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000078781s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135026s
	[INFO] 10.244.1.2:54055 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005238284s
	[INFO] 10.244.1.2:56033 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250432s
	[INFO] 10.244.1.2:35801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184148s
	[INFO] 10.244.1.2:59610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190826s
	[INFO] 10.244.2.2:33184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859772s
	[INFO] 10.244.2.2:46345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160195s
	[INFO] 10.244.2.2:58454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001735681s
	[INFO] 10.244.2.2:51235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213117s
	[INFO] 10.244.0.4:40361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002214882s
	[INFO] 10.244.0.4:35596 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091564s
	[INFO] 10.244.1.2:54454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176281s
	[INFO] 10.244.1.2:54571 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089015s
	[INFO] 10.244.2.2:54102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258038s
	[INFO] 10.244.2.2:51160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106978s
	[INFO] 10.244.2.2:57393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167598s
	[INFO] 10.244.0.4:39801 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084483s
	[INFO] 10.244.0.4:60729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097532s
	[INFO] 10.244.1.2:36580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164463s
	[INFO] 10.244.2.2:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036575s
	[INFO] 10.244.2.2:54375 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256014s
	[INFO] 10.244.0.4:46032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082269s
	
	
	==> describe nodes <==
	Name:               ha-053933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-053933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 081ddd3e0f204426846b528e120c10c6
	  System UUID:                081ddd3e-0f20-4426-846b-528e120c10c6
	  Boot ID:                    1dece28a-ef9e-423f-833d-5ccfd814e28e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gx88f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-sj44v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7c65d6cfc9-tqtzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-053933                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-4gmn6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-053933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-053933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-7bwxp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-053933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-053933                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s  kubelet          Node ha-053933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s  kubelet          Node ha-053933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s  kubelet          Node ha-053933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-053933 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	
	
	Name:               ha-053933-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:33:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:35:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-053933-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea0094a740a940c483867f94cc6c27db
	  System UUID:                ea0094a7-40a9-40c4-8386-7f94cc6c27db
	  Boot ID:                    c270f988-c787-4383-b26b-ec82a3153fd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cll72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-053933-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-cx4hw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-053933-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-ha-053933-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-zvblz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-053933-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-053933-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m28s                  cidrAllocator    Node ha-053933-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-053933-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-053933-m02 status is now: NodeNotReady
	
	
	Name:               ha-053933-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-053933-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2c62335e69d4ef7b1309ece17e10873
	  System UUID:                c2c62335-e69d-4ef7-b130-9ece17e10873
	  Boot ID:                    2e17b6e0-0617-4bea-8b9d-8cd903a9fcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvw9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-053933-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-6tzch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-053933-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-053933-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-dqqj6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-053933-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-053933-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m13s                  cidrAllocator    Node ha-053933-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-053933-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	
	
	Name:               ha-053933-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_35_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-053933-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 114115be4a5e4a82bdbd4b86727c66b7
	  System UUID:                114115be-4a5e-4a82-bdbd-4b86727c66b7
	  Boot ID:                    dba1fc43-1911-4c9b-b57d-d3bef52a7eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-874mt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-wmjjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m13s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m13s)  kubelet          Node ha-053933-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m13s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m12s                  cidrAllocator    Node ha-053933-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  NodeReady                2m55s                  kubelet          Node ha-053933-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.846047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.647512] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.009818] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056187] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087371] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186817] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.108690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.296967] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.247594] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.068909] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.901650] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.502104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 12:32] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.286659] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +5.238921] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.342023] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 12:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866] <==
	{"level":"warn","ts":"2024-10-07T12:38:30.368464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.375497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.378427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.380333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.383687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.390480Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.405029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.409698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.413969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.415117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.423306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.431347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.438391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.442314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.446490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.447967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.454112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.460675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.467038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.471727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.475451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.480026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.480141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.487737Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:30.495036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:38:30 up 7 min,  0 users,  load average: 0.15, 0.17, 0.08
	Linux ha-053933 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c] <==
	I1007 12:37:50.810872       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.814625       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:00.814833       1 main.go:299] handling current node
	I1007 12:38:00.814970       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:00.814985       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:00.815723       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:00.815798       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:00.815998       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:00.816057       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:10.808104       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:10.808153       1 main.go:299] handling current node
	I1007 12:38:10.808168       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:10.808173       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:10.808359       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:10.808385       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:10.808430       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:10.808435       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812716       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:20.812802       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812961       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:20.812985       1 main.go:299] handling current node
	I1007 12:38:20.813004       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:20.813010       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:20.813053       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:20.813073       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38] <==
	I1007 12:32:02.949969       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1007 12:32:02.963249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I1007 12:32:02.964729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:32:02.971941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:32:03.069138       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 12:32:03.964342       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 12:32:03.987254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:32:04.095813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:32:08.516111       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 12:32:08.611991       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1007 12:34:48.798901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37568: use of closed network connection
	E1007 12:34:49.000124       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37592: use of closed network connection
	E1007 12:34:49.206162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37608: use of closed network connection
	E1007 12:34:49.419763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E1007 12:34:49.618246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37650: use of closed network connection
	E1007 12:34:49.830698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37678: use of closed network connection
	E1007 12:34:50.014306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37698: use of closed network connection
	E1007 12:34:50.203031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37722: use of closed network connection
	E1007 12:34:50.399836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37736: use of closed network connection
	E1007 12:34:50.721906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37754: use of closed network connection
	E1007 12:34:50.916874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37778: use of closed network connection
	E1007 12:34:51.129244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37784: use of closed network connection
	E1007 12:34:51.331880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37804: use of closed network connection
	E1007 12:34:51.534234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37816: use of closed network connection
	E1007 12:34:51.740225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37836: use of closed network connection
	
	
	==> kube-controller-manager [f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255] <==
	E1007 12:35:18.261020       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-053933-m04': failed to patch node CIDR: Node \"ha-053933-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1007 12:35:18.261043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.267395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.419356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.886255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.927634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m03"
	I1007 12:35:21.910317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.213570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.317164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.867893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.869105       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-053933-m04"
	I1007 12:35:22.944595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:28.233385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.043630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.044602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:35:36.061944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.755307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:48.386926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:36:37.247180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:36:37.247992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.283173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.296003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.649837ms"
	I1007 12:36:37.296097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.311µs"
	I1007 12:36:37.968993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:42.526972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	
	
	==> kube-proxy [aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:32:09.744772       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:32:09.779605       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	E1007 12:32:09.779729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:32:09.875780       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:32:09.875870       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:32:09.875896       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:32:09.899096       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:32:09.900043       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:32:09.900063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:32:09.904977       1 config.go:199] "Starting service config controller"
	I1007 12:32:09.905625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:32:09.905998       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:32:09.906007       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:32:09.909098       1 config.go:328] "Starting node config controller"
	I1007 12:32:09.912651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:32:10.006461       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:32:10.006556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:32:10.013752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525] <==
	W1007 12:32:02.522045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:32:02.522209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:32:02.691725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:32:02.691861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 12:32:04.967169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 12:35:18.155212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.155405       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 055fbe2f-0b88-4875-9ee5-5672731cf7e9(kube-system/kindnet-tskmj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tskmj"
	E1007 12:35:18.155442       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-tskmj"
	I1007 12:35:18.155464       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.234037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.235784       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17a817ae-69ea-44f0-907d-a935057c340a(kube-system/kube-proxy-hkx4p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hkx4p"
	E1007 12:35:18.235899       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-hkx4p"
	I1007 12:35:18.235923       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.234494       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.237640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fe0255b5-5ad9-4633-a28d-ecdf64a0267c(kube-system/kindnet-gbqh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gbqh5"
	E1007 12:35:18.237709       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-gbqh5"
	I1007 12:35:18.237727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.300436       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300714       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71fc4648-ffa7-4b9c-b3be-35c98da41798(kube-system/kube-proxy-wmjjq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wmjjq"
	E1007 12:35:18.300906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-wmjjq"
	I1007 12:35:18.301040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	E1007 12:35:18.302463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cbe2af3e-e15d-4855-b598-450159e1b100(kube-system/kindnet-874mt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-874mt"
	E1007 12:35:18.302498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-874mt"
	I1007 12:35:18.302596       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	
	
	==> kubelet <==
	Oct 07 12:37:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:37:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248076    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248142    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250603    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250995    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252717    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252763    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.255287    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.257649    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.260273    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.261117    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264814    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264871    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.151993    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266021    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266073    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267592    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267615    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271756    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271782    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.916349631s)
ha_test.go:309: expected profile "ha-053933" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-053933\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-053933\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-053933\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.152\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.227\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.53\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.244\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"m
etallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":
262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (1.448694179s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m03_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-053933 node start m02 -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:31:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:31:18.148064  766330 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:18.148178  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148182  766330 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:18.148187  766330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:18.148357  766330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:18.148967  766330 out.go:352] Setting JSON to false
	I1007 12:31:18.149958  766330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8027,"bootTime":1728296251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:18.150102  766330 start.go:139] virtualization: kvm guest
	I1007 12:31:18.152485  766330 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:18.154248  766330 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:18.154296  766330 notify.go:220] Checking for updates...
	I1007 12:31:18.157253  766330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:18.159046  766330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:18.160370  766330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.161706  766330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:18.163112  766330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:18.164841  766330 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:18.202110  766330 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:31:18.203531  766330 start.go:297] selected driver: kvm2
	I1007 12:31:18.203547  766330 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:31:18.203562  766330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:18.204518  766330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.204603  766330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:31:18.220705  766330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:31:18.220766  766330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:31:18.221021  766330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:31:18.221059  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:18.221106  766330 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:31:18.221116  766330 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:31:18.221169  766330 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:18.221279  766330 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:31:18.223403  766330 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:31:18.224688  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:18.224749  766330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:31:18.224761  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:31:18.224844  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:31:18.224857  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:31:18.225188  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:18.225228  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json: {Name:mk42211822a040c72189a8c96b9ffb20916f09bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:18.225385  766330 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:31:18.225414  766330 start.go:364] duration metric: took 16.211µs to acquireMachinesLock for "ha-053933"
	I1007 12:31:18.225431  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:31:18.225482  766330 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:31:18.227000  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:31:18.227165  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:18.227217  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:18.241971  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1007 12:31:18.242468  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:18.243060  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:31:18.243086  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:18.243440  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:18.243664  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:18.243802  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:18.243958  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:31:18.243992  766330 client.go:168] LocalClient.Create starting
	I1007 12:31:18.244024  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:31:18.244058  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244073  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244137  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:31:18.244157  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:31:18.244173  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:31:18.244190  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:31:18.244198  766330 main.go:141] libmachine: (ha-053933) Calling .PreCreateCheck
	I1007 12:31:18.244526  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:18.244944  766330 main.go:141] libmachine: Creating machine...
	I1007 12:31:18.244959  766330 main.go:141] libmachine: (ha-053933) Calling .Create
	I1007 12:31:18.245125  766330 main.go:141] libmachine: (ha-053933) Creating KVM machine...
	I1007 12:31:18.246330  766330 main.go:141] libmachine: (ha-053933) DBG | found existing default KVM network
	I1007 12:31:18.247162  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.246970  766353 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1007 12:31:18.247250  766330 main.go:141] libmachine: (ha-053933) DBG | created network xml: 
	I1007 12:31:18.247277  766330 main.go:141] libmachine: (ha-053933) DBG | <network>
	I1007 12:31:18.247291  766330 main.go:141] libmachine: (ha-053933) DBG |   <name>mk-ha-053933</name>
	I1007 12:31:18.247307  766330 main.go:141] libmachine: (ha-053933) DBG |   <dns enable='no'/>
	I1007 12:31:18.247318  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247331  766330 main.go:141] libmachine: (ha-053933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:31:18.247341  766330 main.go:141] libmachine: (ha-053933) DBG |     <dhcp>
	I1007 12:31:18.247353  766330 main.go:141] libmachine: (ha-053933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:31:18.247366  766330 main.go:141] libmachine: (ha-053933) DBG |     </dhcp>
	I1007 12:31:18.247382  766330 main.go:141] libmachine: (ha-053933) DBG |   </ip>
	I1007 12:31:18.247394  766330 main.go:141] libmachine: (ha-053933) DBG |   
	I1007 12:31:18.247403  766330 main.go:141] libmachine: (ha-053933) DBG | </network>
	I1007 12:31:18.247414  766330 main.go:141] libmachine: (ha-053933) DBG | 
	I1007 12:31:18.252550  766330 main.go:141] libmachine: (ha-053933) DBG | trying to create private KVM network mk-ha-053933 192.168.39.0/24...
	I1007 12:31:18.323012  766330 main.go:141] libmachine: (ha-053933) DBG | private KVM network mk-ha-053933 192.168.39.0/24 created
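The DBG lines above show the kvm2 driver rendering a <network> definition and creating the private network mk-ha-053933 on qemu:///system. For reference, the same define-and-start operation can be expressed with the libvirt Go bindings; this is a minimal illustrative sketch assuming libvirt.org/go/libvirt, not the driver's actual code:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // Same network XML the log prints, as a Go string.
    const networkXML = `<network>
      <name>mk-ha-053933</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Connect to the system libvirt daemon, the URI shown in the log.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent network from the XML, then start it.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("network mk-ha-053933 created")
    }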
	I1007 12:31:18.323051  766330 main.go:141] libmachine: (ha-053933) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.323065  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.322988  766353 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.323078  766330 main.go:141] libmachine: (ha-053933) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:31:18.323220  766330 main.go:141] libmachine: (ha-053933) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:31:18.600250  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.600066  766353 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa...
	I1007 12:31:18.865018  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864813  766353 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk...
	I1007 12:31:18.865057  766330 main.go:141] libmachine: (ha-053933) DBG | Writing magic tar header
	I1007 12:31:18.865071  766330 main.go:141] libmachine: (ha-053933) DBG | Writing SSH key tar header
	I1007 12:31:18.865083  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:18.864941  766353 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 ...
	I1007 12:31:18.865103  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933
	I1007 12:31:18.865116  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933 (perms=drwx------)
	I1007 12:31:18.865126  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:31:18.865135  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:18.865141  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:31:18.865149  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:31:18.865159  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:31:18.865166  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:31:18.865180  766330 main.go:141] libmachine: (ha-053933) DBG | Checking permissions on dir: /home
	I1007 12:31:18.865192  766330 main.go:141] libmachine: (ha-053933) DBG | Skipping /home - not owner
	I1007 12:31:18.865206  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:31:18.865221  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:31:18.865229  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:31:18.865238  766330 main.go:141] libmachine: (ha-053933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:31:18.865245  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:18.866439  766330 main.go:141] libmachine: (ha-053933) define libvirt domain using xml: 
	I1007 12:31:18.866466  766330 main.go:141] libmachine: (ha-053933) <domain type='kvm'>
	I1007 12:31:18.866476  766330 main.go:141] libmachine: (ha-053933)   <name>ha-053933</name>
	I1007 12:31:18.866483  766330 main.go:141] libmachine: (ha-053933)   <memory unit='MiB'>2200</memory>
	I1007 12:31:18.866492  766330 main.go:141] libmachine: (ha-053933)   <vcpu>2</vcpu>
	I1007 12:31:18.866503  766330 main.go:141] libmachine: (ha-053933)   <features>
	I1007 12:31:18.866510  766330 main.go:141] libmachine: (ha-053933)     <acpi/>
	I1007 12:31:18.866520  766330 main.go:141] libmachine: (ha-053933)     <apic/>
	I1007 12:31:18.866530  766330 main.go:141] libmachine: (ha-053933)     <pae/>
	I1007 12:31:18.866546  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866569  766330 main.go:141] libmachine: (ha-053933)   </features>
	I1007 12:31:18.866589  766330 main.go:141] libmachine: (ha-053933)   <cpu mode='host-passthrough'>
	I1007 12:31:18.866598  766330 main.go:141] libmachine: (ha-053933)   
	I1007 12:31:18.866607  766330 main.go:141] libmachine: (ha-053933)   </cpu>
	I1007 12:31:18.866617  766330 main.go:141] libmachine: (ha-053933)   <os>
	I1007 12:31:18.866624  766330 main.go:141] libmachine: (ha-053933)     <type>hvm</type>
	I1007 12:31:18.866630  766330 main.go:141] libmachine: (ha-053933)     <boot dev='cdrom'/>
	I1007 12:31:18.866636  766330 main.go:141] libmachine: (ha-053933)     <boot dev='hd'/>
	I1007 12:31:18.866641  766330 main.go:141] libmachine: (ha-053933)     <bootmenu enable='no'/>
	I1007 12:31:18.866647  766330 main.go:141] libmachine: (ha-053933)   </os>
	I1007 12:31:18.866652  766330 main.go:141] libmachine: (ha-053933)   <devices>
	I1007 12:31:18.866659  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='cdrom'>
	I1007 12:31:18.866666  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/boot2docker.iso'/>
	I1007 12:31:18.866673  766330 main.go:141] libmachine: (ha-053933)       <target dev='hdc' bus='scsi'/>
	I1007 12:31:18.866678  766330 main.go:141] libmachine: (ha-053933)       <readonly/>
	I1007 12:31:18.866683  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866691  766330 main.go:141] libmachine: (ha-053933)     <disk type='file' device='disk'>
	I1007 12:31:18.866702  766330 main.go:141] libmachine: (ha-053933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:31:18.866711  766330 main.go:141] libmachine: (ha-053933)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/ha-053933.rawdisk'/>
	I1007 12:31:18.866722  766330 main.go:141] libmachine: (ha-053933)       <target dev='hda' bus='virtio'/>
	I1007 12:31:18.866731  766330 main.go:141] libmachine: (ha-053933)     </disk>
	I1007 12:31:18.866737  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866745  766330 main.go:141] libmachine: (ha-053933)       <source network='mk-ha-053933'/>
	I1007 12:31:18.866749  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866755  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866759  766330 main.go:141] libmachine: (ha-053933)     <interface type='network'>
	I1007 12:31:18.866768  766330 main.go:141] libmachine: (ha-053933)       <source network='default'/>
	I1007 12:31:18.866775  766330 main.go:141] libmachine: (ha-053933)       <model type='virtio'/>
	I1007 12:31:18.866780  766330 main.go:141] libmachine: (ha-053933)     </interface>
	I1007 12:31:18.866786  766330 main.go:141] libmachine: (ha-053933)     <serial type='pty'>
	I1007 12:31:18.866791  766330 main.go:141] libmachine: (ha-053933)       <target port='0'/>
	I1007 12:31:18.866798  766330 main.go:141] libmachine: (ha-053933)     </serial>
	I1007 12:31:18.866802  766330 main.go:141] libmachine: (ha-053933)     <console type='pty'>
	I1007 12:31:18.866810  766330 main.go:141] libmachine: (ha-053933)       <target type='serial' port='0'/>
	I1007 12:31:18.866821  766330 main.go:141] libmachine: (ha-053933)     </console>
	I1007 12:31:18.866827  766330 main.go:141] libmachine: (ha-053933)     <rng model='virtio'>
	I1007 12:31:18.866834  766330 main.go:141] libmachine: (ha-053933)       <backend model='random'>/dev/random</backend>
	I1007 12:31:18.866840  766330 main.go:141] libmachine: (ha-053933)     </rng>
	I1007 12:31:18.866844  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866850  766330 main.go:141] libmachine: (ha-053933)     
	I1007 12:31:18.866855  766330 main.go:141] libmachine: (ha-053933)   </devices>
	I1007 12:31:18.866860  766330 main.go:141] libmachine: (ha-053933) </domain>
	I1007 12:31:18.866868  766330 main.go:141] libmachine: (ha-053933) 
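The <domain type='kvm'> document dumped above is what gets handed to libvirt to define the VM. A short sketch of that step, reusing the connection and imports from the network sketch earlier (assumption: libvirt.org/go/libvirt bindings; domainXML stands in for the XML printed in the log):

    // defineAndStart defines a persistent KVM domain from XML and boots it.
    // conn is an established *libvirt.Connect; domainXML holds the
    // <domain type='kvm'> document shown above.
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()

        // Create() starts the defined domain, the equivalent of `virsh start`.
        if err := dom.Create(); err != nil {
            return fmt.Errorf("start domain: %w", err)
        }
        return nil
    }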
	I1007 12:31:18.871598  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:91:b8:36 in network default
	I1007 12:31:18.872268  766330 main.go:141] libmachine: (ha-053933) Ensuring networks are active...
	I1007 12:31:18.872288  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:18.873069  766330 main.go:141] libmachine: (ha-053933) Ensuring network default is active
	I1007 12:31:18.873363  766330 main.go:141] libmachine: (ha-053933) Ensuring network mk-ha-053933 is active
	I1007 12:31:18.873853  766330 main.go:141] libmachine: (ha-053933) Getting domain xml...
	I1007 12:31:18.874562  766330 main.go:141] libmachine: (ha-053933) Creating domain...
	I1007 12:31:19.211616  766330 main.go:141] libmachine: (ha-053933) Waiting to get IP...
	I1007 12:31:19.212423  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.212778  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.212812  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.212764  766353 retry.go:31] will retry after 226.747121ms: waiting for machine to come up
	I1007 12:31:19.441331  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.441786  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.441837  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.441730  766353 retry.go:31] will retry after 274.527206ms: waiting for machine to come up
	I1007 12:31:19.718508  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:19.719027  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:19.719064  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:19.718969  766353 retry.go:31] will retry after 356.880394ms: waiting for machine to come up
	I1007 12:31:20.077626  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.078112  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.078145  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.078091  766353 retry.go:31] will retry after 415.686035ms: waiting for machine to come up
	I1007 12:31:20.495868  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:20.496297  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:20.496328  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:20.496232  766353 retry.go:31] will retry after 565.036299ms: waiting for machine to come up
	I1007 12:31:21.062533  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.063181  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.063212  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.063112  766353 retry.go:31] will retry after 934.304139ms: waiting for machine to come up
	I1007 12:31:21.999277  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:21.999729  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:21.999763  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:21.999684  766353 retry.go:31] will retry after 862.178533ms: waiting for machine to come up
	I1007 12:31:22.863123  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:22.863626  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:22.863658  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:22.863574  766353 retry.go:31] will retry after 1.201609733s: waiting for machine to come up
	I1007 12:31:24.066671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:24.067072  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:24.067104  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:24.067015  766353 retry.go:31] will retry after 1.419758916s: waiting for machine to come up
	I1007 12:31:25.488770  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:25.489216  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:25.489240  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:25.489182  766353 retry.go:31] will retry after 2.248635623s: waiting for machine to come up
	I1007 12:31:27.740776  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:27.741277  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:27.741301  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:27.741240  766353 retry.go:31] will retry after 1.919055927s: waiting for machine to come up
	I1007 12:31:29.662363  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:29.662857  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:29.663141  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:29.662878  766353 retry.go:31] will retry after 3.284332028s: waiting for machine to come up
	I1007 12:31:32.951614  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:32.952006  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:32.952134  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:32.951952  766353 retry.go:31] will retry after 3.413281695s: waiting for machine to come up
	I1007 12:31:36.369285  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:36.369674  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find current IP address of domain ha-053933 in network mk-ha-053933
	I1007 12:31:36.369704  766330 main.go:141] libmachine: (ha-053933) DBG | I1007 12:31:36.369624  766353 retry.go:31] will retry after 5.240968669s: waiting for machine to come up
	I1007 12:31:41.615028  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615539  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has current primary IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.615555  766330 main.go:141] libmachine: (ha-053933) Found IP for machine: 192.168.39.152
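The "Waiting to get IP" loop above retries with growing delays until the domain's MAC address turns up in the network's DHCP leases. A rough, hedged equivalent of that lookup is sketched below; the lease field names (Mac, IPaddr) match the NetworkDHCPLease structs printed elsewhere in this log, and the required imports are fmt, strings, time and libvirt.org/go/libvirt:

    // waitForIP polls the libvirt network's DHCP leases until the given MAC
    // address appears, mirroring the retry loop in the log above.
    func waitForIP(conn *libvirt.Connect, networkName, mac string) (string, error) {
        net, err := conn.LookupNetworkByName(networkName)
        if err != nil {
            return "", err
        }
        defer net.Free()

        delay := 200 * time.Millisecond
        for i := 0; i < 20; i++ {
            leases, err := net.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if strings.EqualFold(l.Mac, mac) {
                    return l.IPaddr, nil
                }
            }
            time.Sleep(delay)
            delay *= 2 // back off, roughly like the "will retry after ..." lines
        }
        return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, networkName)
    }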
	I1007 12:31:41.615563  766330 main.go:141] libmachine: (ha-053933) Reserving static IP address...
	I1007 12:31:41.615914  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "ha-053933", mac: "52:54:00:7e:91:1b", ip: "192.168.39.152"} in network mk-ha-053933
	I1007 12:31:41.698423  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:41.698453  766330 main.go:141] libmachine: (ha-053933) Reserved static IP address: 192.168.39.152
	I1007 12:31:41.698466  766330 main.go:141] libmachine: (ha-053933) Waiting for SSH to be available...
	I1007 12:31:41.701233  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:41.701575  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933
	I1007 12:31:41.701604  766330 main.go:141] libmachine: (ha-053933) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:7e:91:1b
	I1007 12:31:41.701733  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:41.701762  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:41.701811  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:41.701844  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:41.701865  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:41.705812  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:31:41.705841  766330 main.go:141] libmachine: (ha-053933) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:31:41.705848  766330 main.go:141] libmachine: (ha-053933) DBG | command : exit 0
	I1007 12:31:41.705853  766330 main.go:141] libmachine: (ha-053933) DBG | err     : exit status 255
	I1007 12:31:41.705861  766330 main.go:141] libmachine: (ha-053933) DBG | output  : 
	I1007 12:31:44.706593  766330 main.go:141] libmachine: (ha-053933) DBG | Getting to WaitForSSH function...
	I1007 12:31:44.709072  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709617  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.709649  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.709785  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH client type: external
	I1007 12:31:44.709814  766330 main.go:141] libmachine: (ha-053933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa (-rw-------)
	I1007 12:31:44.709843  766330 main.go:141] libmachine: (ha-053933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:31:44.709856  766330 main.go:141] libmachine: (ha-053933) DBG | About to run SSH command:
	I1007 12:31:44.709871  766330 main.go:141] libmachine: (ha-053933) DBG | exit 0
	I1007 12:31:44.834399  766330 main.go:141] libmachine: (ha-053933) DBG | SSH cmd err, output: <nil>: 
	I1007 12:31:44.834682  766330 main.go:141] libmachine: (ha-053933) KVM machine creation complete!
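WaitForSSH, as the DBG lines show, simply runs `exit 0` over ssh with a fixed set of client options and retries (roughly every 3 seconds here) until the command exits 0. A bare-bones version shelling out to the external ssh client is sketched below; it assumes ssh is on PATH and keyPath points at the generated id_rsa, and it is not minikube's own helper. Imports: fmt, os/exec, time.

    // waitForSSH runs `exit 0` on the guest over ssh until it succeeds,
    // using the same kind of client options the log shows.
    func waitForSSH(ip, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@" + ip,
            "exit 0",
        }
        var lastErr error
        for i := 0; i < 10; i++ {
            cmd := exec.Command("ssh", args...)
            if lastErr = cmd.Run(); lastErr == nil {
                return nil // guest is reachable and sshd accepts the key
            }
            time.Sleep(3 * time.Second) // the log shows ~3s between attempts
        }
        return fmt.Errorf("ssh never became available: %w", lastErr)
    }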
	I1007 12:31:44.834978  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:44.835619  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.835838  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:44.836043  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:31:44.836062  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:31:44.837184  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:31:44.837198  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:31:44.837203  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:31:44.837209  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.839398  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839807  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.839830  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.839939  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.840108  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840281  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.840429  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.840654  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.840918  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.840931  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:31:44.945582  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:44.945632  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:31:44.945644  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:44.948258  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948719  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:44.948754  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:44.948921  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:44.949136  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949341  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:44.949504  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:44.949690  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:44.949946  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:44.949963  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:31:45.055227  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:31:45.055350  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:31:45.055364  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:31:45.055378  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055638  766330 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:31:45.055680  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.055865  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.058671  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059121  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.059156  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.059299  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.059582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059753  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.059896  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.060046  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.060230  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.060242  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:31:45.177180  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:31:45.177214  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.180205  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180610  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.180640  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.180887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.181104  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181275  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.181434  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.181657  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.181837  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.181854  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:31:45.296167  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:31:45.296213  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:31:45.296262  766330 buildroot.go:174] setting up certificates
	I1007 12:31:45.296275  766330 provision.go:84] configureAuth start
	I1007 12:31:45.296287  766330 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:31:45.296598  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.299370  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299721  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.299769  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.299887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.302528  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.302981  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.303013  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.303173  766330 provision.go:143] copyHostCerts
	I1007 12:31:45.303222  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303263  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:31:45.303285  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:31:45.303361  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:31:45.303500  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303523  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:31:45.303528  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:31:45.303559  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:31:45.303616  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303633  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:31:45.303637  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:31:45.303657  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:31:45.303708  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
	I1007 12:31:45.422772  766330 provision.go:177] copyRemoteCerts
	I1007 12:31:45.422847  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:31:45.422884  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.426109  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426432  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.426461  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.426620  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.426796  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.426987  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.427121  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.508256  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:31:45.508354  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:31:45.535023  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:31:45.535097  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:31:45.561047  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:31:45.561146  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:31:45.586470  766330 provision.go:87] duration metric: took 290.178076ms to configureAuth
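configureAuth above issues a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube, signed by the CA under .minikube/certs, then copies ca.pem, server.pem and server-key.pem onto the guest. A compact illustration of generating such a SAN-bearing certificate with the Go standard library follows; it assumes caCert/caKey and serverKey are already loaded, and it is not minikube's provisioning code. Imports: crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, encoding/pem, math/big, net, time.

    // newServerCert issues a server certificate with the SANs listed in the
    // log, signed by caCert/caKey; serverKey is the server's own RSA key.
    func newServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "ha-053933", Organization: []string{"jenkins.ha-053933"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-053933", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.152")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        // PEM-encode so the result can be written out as server.pem.
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }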
	I1007 12:31:45.586509  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:31:45.586752  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:45.586838  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.589503  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.589873  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.589917  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.590215  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.590402  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590554  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.590703  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.590899  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.591142  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.591160  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:31:45.816081  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:31:45.816125  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:31:45.816137  766330 main.go:141] libmachine: (ha-053933) Calling .GetURL
	I1007 12:31:45.817540  766330 main.go:141] libmachine: (ha-053933) DBG | Using libvirt version 6000000
	I1007 12:31:45.820289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820694  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.820725  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.820851  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:31:45.820871  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:31:45.820882  766330 client.go:171] duration metric: took 27.576881663s to LocalClient.Create
	I1007 12:31:45.820914  766330 start.go:167] duration metric: took 27.57695761s to libmachine.API.Create "ha-053933"
	I1007 12:31:45.820939  766330 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:31:45.820955  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:31:45.820986  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:45.821218  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:31:45.821261  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.823471  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.823791  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.823834  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.824015  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.824234  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.824403  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.824535  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:45.905405  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:31:45.910330  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:31:45.910363  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:31:45.910424  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:31:45.910498  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:31:45.910509  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:31:45.910617  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:31:45.921262  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:45.947335  766330 start.go:296] duration metric: took 126.377039ms for postStartSetup
	I1007 12:31:45.947395  766330 main.go:141] libmachine: (ha-053933) Calling .GetConfigRaw
	I1007 12:31:45.948057  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:45.950566  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.950901  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.950931  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.951158  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:31:45.951337  766330 start.go:128] duration metric: took 27.725842508s to createHost
	I1007 12:31:45.951369  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:45.953682  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954057  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:45.954084  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:45.954210  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:45.954414  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954585  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:45.954727  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:45.954891  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:31:45.955077  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:31:45.955089  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:31:46.059048  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304306.039624942
	
	I1007 12:31:46.059075  766330 fix.go:216] guest clock: 1728304306.039624942
	I1007 12:31:46.059083  766330 fix.go:229] Guest: 2024-10-07 12:31:46.039624942 +0000 UTC Remote: 2024-10-07 12:31:45.951349706 +0000 UTC m=+27.845880248 (delta=88.275236ms)
	I1007 12:31:46.059106  766330 fix.go:200] guest clock delta is within tolerance: 88.275236ms
	I1007 12:31:46.059111  766330 start.go:83] releasing machines lock for "ha-053933", held for 27.833688154s
	I1007 12:31:46.059131  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.059394  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:46.062064  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062406  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.062431  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.062578  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063106  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063318  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:31:46.063436  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:31:46.063484  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.063563  766330 ssh_runner.go:195] Run: cat /version.json
	I1007 12:31:46.063582  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:31:46.066118  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066393  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066431  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066454  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066641  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066729  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:46.066762  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:46.066811  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.066931  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:31:46.066955  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067124  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:31:46.067115  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.067267  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:31:46.067400  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:31:46.143506  766330 ssh_runner.go:195] Run: systemctl --version
	I1007 12:31:46.170858  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:31:46.332209  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:31:46.338580  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:31:46.338677  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:31:46.356826  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:31:46.356863  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:31:46.356954  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:31:46.374524  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:31:46.390007  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:31:46.390089  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:31:46.404935  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:31:46.420186  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:31:46.537561  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:31:46.724537  766330 docker.go:233] disabling docker service ...
	I1007 12:31:46.724631  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:31:46.740520  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:31:46.754710  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:31:46.868070  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:31:46.983211  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:31:46.998357  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:31:47.018646  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:31:47.018734  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.030677  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:31:47.030766  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.042531  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.053856  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.065763  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:31:47.077170  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.088459  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:31:47.106901  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
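Taken together, the sed edits above amount to a small CRI-O drop-in. A rough sketch of what /etc/crio/crio.conf.d/02-crio.conf ends up containing after this step (section headers and key ordering are assumptions, since the sed expressions match keys regardless of section; unrelated keys omitted):

	$ cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
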
	I1007 12:31:47.118161  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:31:47.128388  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:31:47.128462  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:31:47.142126  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:31:47.154515  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:47.283963  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:31:47.385321  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:31:47.385405  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:31:47.390485  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:31:47.390552  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:31:47.394825  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:31:47.439074  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:31:47.439187  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.469132  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:31:47.501636  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:31:47.503367  766330 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:31:47.506449  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.506817  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:31:47.506859  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:31:47.507082  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:31:47.511597  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:47.525698  766330 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:31:47.525829  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:31:47.525874  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:47.561011  766330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:31:47.561094  766330 ssh_runner.go:195] Run: which lz4
	I1007 12:31:47.565196  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:31:47.565316  766330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:31:47.569571  766330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:31:47.569613  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:31:49.022834  766330 crio.go:462] duration metric: took 1.457534476s to copy over tarball
	I1007 12:31:49.022945  766330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:31:51.131868  766330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108889496s)
	I1007 12:31:51.131914  766330 crio.go:469] duration metric: took 2.109034387s to extract the tarball
	I1007 12:31:51.131926  766330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:31:51.169816  766330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:31:51.217403  766330 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:31:51.217431  766330 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:31:51.217440  766330 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:31:51.217556  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:31:51.217655  766330 ssh_runner.go:195] Run: crio config
	I1007 12:31:51.271379  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:31:51.271408  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:31:51.271420  766330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:31:51.271445  766330 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:31:51.271623  766330 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:31:51.271654  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:31:51.271699  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:31:51.289463  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:31:51.289607  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:31:51.289677  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:31:51.300325  766330 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:31:51.300403  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:31:51.311044  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:31:51.329552  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:31:51.347746  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:31:51.366188  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:31:51.384590  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:31:51.388865  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:31:51.402571  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:31:51.531092  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:31:51.550538  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:31:51.550568  766330 certs.go:194] generating shared ca certs ...
	I1007 12:31:51.550589  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.550791  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:31:51.550844  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:31:51.550855  766330 certs.go:256] generating profile certs ...
	I1007 12:31:51.550949  766330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:31:51.550971  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt with IP's: []
	I1007 12:31:51.873489  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt ...
	I1007 12:31:51.873532  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt: {Name:mkf7b8a7f4d9827c14fd0fbc8bb02e2f79d65528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873758  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key ...
	I1007 12:31:51.873776  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key: {Name:mk6b5a827040be723c18ebdcd9fe7d1599565bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:51.873894  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a
	I1007 12:31:51.873912  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I1007 12:31:52.061549  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a ...
	I1007 12:31:52.061587  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a: {Name:mk1a012d659f1c8c4afc92ca485eba408eb37a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061787  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a ...
	I1007 12:31:52.061804  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a: {Name:mkb1195bd1ddd6ea78076dea0e840887aeae92ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.061908  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:31:52.062012  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.0208bc6a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:31:52.062107  766330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:31:52.062125  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt with IP's: []
	I1007 12:31:52.119663  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt ...
	I1007 12:31:52.119698  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt: {Name:mkf6d674dcac47b878e8df13383f77bcf932d249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.119900  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key ...
	I1007 12:31:52.119913  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key: {Name:mk301510b9dc1296a9e7f127da3f0d4b86905808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:31:52.120033  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:31:52.120053  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:31:52.120064  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:31:52.120077  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:31:52.120087  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:31:52.120118  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:31:52.120142  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:31:52.120155  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:31:52.120209  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:31:52.120251  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:31:52.120261  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:31:52.120290  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:31:52.120312  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:31:52.120339  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:31:52.120379  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:31:52.120408  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.120422  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.120434  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.121128  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:31:52.149003  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:31:52.175017  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:31:52.201648  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:31:52.228352  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:31:52.255290  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:31:52.282215  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:31:52.309286  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:31:52.337694  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:31:52.366883  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:31:52.402754  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:31:52.430306  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:31:52.451397  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:31:52.458450  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:31:52.470676  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476879  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.476941  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:31:52.483560  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:31:52.495531  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:31:52.507273  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512685  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.512760  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:31:52.519035  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:31:52.530701  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:31:52.542163  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547093  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.547169  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:31:52.553420  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
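The test/openssl/ln triples above are how minikube installs each CA into the guest trust store: OpenSSL looks CA certificates up by subject-name hash, so every PEM is linked into /etc/ssl/certs both under its own name and under <subject-hash>.0 (here b5213941 for minikubeCA.pem, 3ec20f2e for 7543242.pem and 51391683 for 754324.pem). A condensed shell sketch of the same sequence for one certificate:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo test -L /etc/ssl/certs/${HASH}.0 || sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0
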
	I1007 12:31:52.565081  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:31:52.569549  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:31:52.569630  766330 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:52.569737  766330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:31:52.569800  766330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:31:52.613192  766330 cri.go:89] found id: ""
	I1007 12:31:52.613311  766330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:31:52.625713  766330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:31:52.636220  766330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:31:52.646590  766330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:31:52.646626  766330 kubeadm.go:157] found existing configuration files:
	
	I1007 12:31:52.646686  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:31:52.656870  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:31:52.656944  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:31:52.667467  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:31:52.677109  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:31:52.677186  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:31:52.687168  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.696969  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:31:52.697035  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:31:52.706604  766330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:31:52.716252  766330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:31:52.716325  766330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:31:52.726572  766330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:31:52.847487  766330 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:31:52.847581  766330 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:31:52.955260  766330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:31:52.955420  766330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:31:52.955545  766330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:31:52.964537  766330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:31:53.051755  766330 out.go:235]   - Generating certificates and keys ...
	I1007 12:31:53.051938  766330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:31:53.052035  766330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:31:53.320791  766330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:31:53.468201  766330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:31:53.842801  766330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:31:53.969642  766330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:31:54.101242  766330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:31:54.101440  766330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.456134  766330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:31:54.456354  766330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-053933 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I1007 12:31:54.521797  766330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:31:54.769778  766330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:31:55.125227  766330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:31:55.125448  766330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:31:55.361551  766330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:31:55.783698  766330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:31:56.057409  766330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:31:56.211507  766330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:31:56.348279  766330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:31:56.349002  766330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:31:56.353525  766330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:31:56.355620  766330 out.go:235]   - Booting up control plane ...
	I1007 12:31:56.355760  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:31:56.356147  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:31:56.356974  766330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:31:56.373175  766330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:31:56.381538  766330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:31:56.381594  766330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:31:56.521323  766330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:31:56.521511  766330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:31:57.022943  766330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.739695ms
	I1007 12:31:57.023054  766330 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:32:03.058810  766330 kubeadm.go:310] [api-check] The API server is healthy after 6.037121779s
	I1007 12:32:03.072819  766330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:32:03.101026  766330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:32:03.645977  766330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:32:03.646231  766330 kubeadm.go:310] [mark-control-plane] Marking the node ha-053933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:32:03.661217  766330 kubeadm.go:310] [bootstrap-token] Using token: ofkgus.681l1bfefmhh1xkb
	I1007 12:32:03.662957  766330 out.go:235]   - Configuring RBAC rules ...
	I1007 12:32:03.663116  766330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:32:03.674911  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:32:03.697863  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:32:03.703512  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:32:03.708092  766330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:32:03.713563  766330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:32:03.734636  766330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:32:03.997011  766330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:32:04.464216  766330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:32:04.465131  766330 kubeadm.go:310] 
	I1007 12:32:04.465191  766330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:32:04.465199  766330 kubeadm.go:310] 
	I1007 12:32:04.465336  766330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:32:04.465360  766330 kubeadm.go:310] 
	I1007 12:32:04.465394  766330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:32:04.465446  766330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:32:04.465491  766330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:32:04.465504  766330 kubeadm.go:310] 
	I1007 12:32:04.465572  766330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:32:04.465599  766330 kubeadm.go:310] 
	I1007 12:32:04.465644  766330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:32:04.465663  766330 kubeadm.go:310] 
	I1007 12:32:04.465719  766330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:32:04.465794  766330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:32:04.465885  766330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:32:04.465901  766330 kubeadm.go:310] 
	I1007 12:32:04.466075  766330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:32:04.466193  766330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:32:04.466201  766330 kubeadm.go:310] 
	I1007 12:32:04.466294  766330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466394  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 12:32:04.466415  766330 kubeadm.go:310] 	--control-plane 
	I1007 12:32:04.466421  766330 kubeadm.go:310] 
	I1007 12:32:04.466490  766330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:32:04.466497  766330 kubeadm.go:310] 
	I1007 12:32:04.466565  766330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ofkgus.681l1bfefmhh1xkb \
	I1007 12:32:04.466661  766330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 12:32:04.467760  766330 kubeadm.go:310] W1007 12:31:52.830915     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468039  766330 kubeadm.go:310] W1007 12:31:52.831996     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:32:04.468166  766330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:32:04.468194  766330 cni.go:84] Creating CNI manager for ""
	I1007 12:32:04.468205  766330 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:32:04.470298  766330 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:32:04.471574  766330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:32:04.477802  766330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:32:04.477826  766330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:32:04.497072  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:32:04.906135  766330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:32:04.906201  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:04.906237  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933 minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=true
	I1007 12:32:05.063682  766330 ops.go:34] apiserver oom_adj: -16
	I1007 12:32:05.063698  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:05.564187  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.063920  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:06.563953  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.064483  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:07.564765  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.064739  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:08.564036  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.063899  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:32:09.198443  766330 kubeadm.go:1113] duration metric: took 4.292302963s to wait for elevateKubeSystemPrivileges
	I1007 12:32:09.198484  766330 kubeadm.go:394] duration metric: took 16.62887336s to StartCluster
	I1007 12:32:09.198511  766330 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.198603  766330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.199399  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:09.199661  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:32:09.199654  766330 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:09.199683  766330 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:32:09.199750  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:32:09.199769  766330 addons.go:69] Setting storage-provisioner=true in profile "ha-053933"
	I1007 12:32:09.199790  766330 addons.go:234] Setting addon storage-provisioner=true in "ha-053933"
	I1007 12:32:09.199789  766330 addons.go:69] Setting default-storageclass=true in profile "ha-053933"
	I1007 12:32:09.199827  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.199861  766330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-053933"
	I1007 12:32:09.199924  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:09.200250  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200297  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.200379  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.200403  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.217502  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I1007 12:32:09.217554  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1007 12:32:09.217985  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218145  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.218593  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218622  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.218725  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.218753  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.219006  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219124  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.219326  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.219637  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.219691  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.221998  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:32:09.222368  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:32:09.223019  766330 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:32:09.223381  766330 addons.go:234] Setting addon default-storageclass=true in "ha-053933"
	I1007 12:32:09.223435  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:09.223846  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.223902  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.237604  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I1007 12:32:09.238161  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.238820  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.238847  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.239267  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.239621  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.242388  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.242754  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I1007 12:32:09.243274  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.243977  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.244007  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.244396  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.244986  766330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:32:09.245068  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:09.245147  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:09.246976  766330 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.247004  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:32:09.247031  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.251289  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.251823  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.251851  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.252064  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.252294  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.252448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.252580  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.263439  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1007 12:32:09.263833  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:09.264713  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:09.264733  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:09.265269  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:09.265519  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:09.267198  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:09.267411  766330 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:09.267431  766330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:32:09.267448  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:09.271160  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.271638  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:09.271652  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:09.272078  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:09.272247  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:09.272388  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:09.272476  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:09.422833  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:32:09.443940  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:32:09.510999  766330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:32:10.102670  766330 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 12:32:10.350678  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350704  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.350784  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.350815  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351026  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351046  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351056  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351063  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.351128  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.351191  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.351222  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.351239  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.351246  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.352633  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352653  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352669  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.352691  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.352714  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.352813  766330 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:32:10.352834  766330 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:32:10.352951  766330 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:32:10.352963  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.352974  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.352984  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.364518  766330 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:32:10.365197  766330 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:32:10.365213  766330 round_trippers.go:469] Request Headers:
	I1007 12:32:10.365222  766330 round_trippers.go:473]     Content-Type: application/json
	I1007 12:32:10.365226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:32:10.365229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:32:10.368346  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:32:10.368537  766330 main.go:141] libmachine: Making call to close driver server
	I1007 12:32:10.368555  766330 main.go:141] libmachine: (ha-053933) Calling .Close
	I1007 12:32:10.368875  766330 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:32:10.368889  766330 main.go:141] libmachine: (ha-053933) DBG | Closing plugin on server side
	I1007 12:32:10.368895  766330 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:32:10.371604  766330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:32:10.373030  766330 addons.go:510] duration metric: took 1.173351959s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:32:10.373068  766330 start.go:246] waiting for cluster config update ...
	I1007 12:32:10.373085  766330 start.go:255] writing updated cluster config ...
	I1007 12:32:10.375098  766330 out.go:201] 
	I1007 12:32:10.377249  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:10.377439  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.379490  766330 out.go:177] * Starting "ha-053933-m02" control-plane node in "ha-053933" cluster
	I1007 12:32:10.381087  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:32:10.381130  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:32:10.381324  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:32:10.381339  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:32:10.381436  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:10.381664  766330 start.go:360] acquireMachinesLock for ha-053933-m02: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:32:10.381718  766330 start.go:364] duration metric: took 27.543µs to acquireMachinesLock for "ha-053933-m02"
	I1007 12:32:10.381752  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:10.381840  766330 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:32:10.383550  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:32:10.383680  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:10.383748  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:10.399329  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1007 12:32:10.399900  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:10.400460  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:10.400489  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:10.400855  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:10.401087  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:10.401325  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:10.401564  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:32:10.401597  766330 client.go:168] LocalClient.Create starting
	I1007 12:32:10.401634  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:32:10.401683  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401708  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401774  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:32:10.401806  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:32:10.401824  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:32:10.401883  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:32:10.401911  766330 main.go:141] libmachine: (ha-053933-m02) Calling .PreCreateCheck
	I1007 12:32:10.402163  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:10.402584  766330 main.go:141] libmachine: Creating machine...
	I1007 12:32:10.402602  766330 main.go:141] libmachine: (ha-053933-m02) Calling .Create
	I1007 12:32:10.402815  766330 main.go:141] libmachine: (ha-053933-m02) Creating KVM machine...
	I1007 12:32:10.404630  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing default KVM network
	I1007 12:32:10.404848  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found existing private KVM network mk-ha-053933
	I1007 12:32:10.405187  766330 main.go:141] libmachine: (ha-053933-m02) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.405209  766330 main.go:141] libmachine: (ha-053933-m02) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:32:10.405302  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.405168  766716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.405466  766330 main.go:141] libmachine: (ha-053933-m02) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:32:10.686269  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.686123  766716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa...
	I1007 12:32:10.953304  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953079  766716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk...
	I1007 12:32:10.953335  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing magic tar header
	I1007 12:32:10.953347  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Writing SSH key tar header
	I1007 12:32:10.953354  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:10.953302  766716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 ...
	I1007 12:32:10.953491  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02
	I1007 12:32:10.953520  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02 (perms=drwx------)
	I1007 12:32:10.953532  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:32:10.953546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:32:10.953559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:32:10.953567  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:32:10.953577  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:32:10.953583  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:32:10.953594  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:32:10.953602  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:32:10.953610  766330 main.go:141] libmachine: (ha-053933-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:32:10.953626  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:10.953639  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:32:10.953649  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Checking permissions on dir: /home
	I1007 12:32:10.953661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Skipping /home - not owner
	I1007 12:32:10.954892  766330 main.go:141] libmachine: (ha-053933-m02) define libvirt domain using xml: 
	I1007 12:32:10.954919  766330 main.go:141] libmachine: (ha-053933-m02) <domain type='kvm'>
	I1007 12:32:10.954926  766330 main.go:141] libmachine: (ha-053933-m02)   <name>ha-053933-m02</name>
	I1007 12:32:10.954934  766330 main.go:141] libmachine: (ha-053933-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:32:10.954971  766330 main.go:141] libmachine: (ha-053933-m02)   <vcpu>2</vcpu>
	I1007 12:32:10.954998  766330 main.go:141] libmachine: (ha-053933-m02)   <features>
	I1007 12:32:10.955008  766330 main.go:141] libmachine: (ha-053933-m02)     <acpi/>
	I1007 12:32:10.955019  766330 main.go:141] libmachine: (ha-053933-m02)     <apic/>
	I1007 12:32:10.955028  766330 main.go:141] libmachine: (ha-053933-m02)     <pae/>
	I1007 12:32:10.955038  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955048  766330 main.go:141] libmachine: (ha-053933-m02)   </features>
	I1007 12:32:10.955059  766330 main.go:141] libmachine: (ha-053933-m02)   <cpu mode='host-passthrough'>
	I1007 12:32:10.955086  766330 main.go:141] libmachine: (ha-053933-m02)   
	I1007 12:32:10.955107  766330 main.go:141] libmachine: (ha-053933-m02)   </cpu>
	I1007 12:32:10.955118  766330 main.go:141] libmachine: (ha-053933-m02)   <os>
	I1007 12:32:10.955130  766330 main.go:141] libmachine: (ha-053933-m02)     <type>hvm</type>
	I1007 12:32:10.955144  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='cdrom'/>
	I1007 12:32:10.955153  766330 main.go:141] libmachine: (ha-053933-m02)     <boot dev='hd'/>
	I1007 12:32:10.955164  766330 main.go:141] libmachine: (ha-053933-m02)     <bootmenu enable='no'/>
	I1007 12:32:10.955170  766330 main.go:141] libmachine: (ha-053933-m02)   </os>
	I1007 12:32:10.955176  766330 main.go:141] libmachine: (ha-053933-m02)   <devices>
	I1007 12:32:10.955183  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='cdrom'>
	I1007 12:32:10.955199  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/boot2docker.iso'/>
	I1007 12:32:10.955214  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:32:10.955226  766330 main.go:141] libmachine: (ha-053933-m02)       <readonly/>
	I1007 12:32:10.955236  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955247  766330 main.go:141] libmachine: (ha-053933-m02)     <disk type='file' device='disk'>
	I1007 12:32:10.955259  766330 main.go:141] libmachine: (ha-053933-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:32:10.955273  766330 main.go:141] libmachine: (ha-053933-m02)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/ha-053933-m02.rawdisk'/>
	I1007 12:32:10.955284  766330 main.go:141] libmachine: (ha-053933-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:32:10.955295  766330 main.go:141] libmachine: (ha-053933-m02)     </disk>
	I1007 12:32:10.955317  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955337  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='mk-ha-053933'/>
	I1007 12:32:10.955355  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955372  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955385  766330 main.go:141] libmachine: (ha-053933-m02)     <interface type='network'>
	I1007 12:32:10.955397  766330 main.go:141] libmachine: (ha-053933-m02)       <source network='default'/>
	I1007 12:32:10.955410  766330 main.go:141] libmachine: (ha-053933-m02)       <model type='virtio'/>
	I1007 12:32:10.955419  766330 main.go:141] libmachine: (ha-053933-m02)     </interface>
	I1007 12:32:10.955429  766330 main.go:141] libmachine: (ha-053933-m02)     <serial type='pty'>
	I1007 12:32:10.955444  766330 main.go:141] libmachine: (ha-053933-m02)       <target port='0'/>
	I1007 12:32:10.955456  766330 main.go:141] libmachine: (ha-053933-m02)     </serial>
	I1007 12:32:10.955483  766330 main.go:141] libmachine: (ha-053933-m02)     <console type='pty'>
	I1007 12:32:10.955500  766330 main.go:141] libmachine: (ha-053933-m02)       <target type='serial' port='0'/>
	I1007 12:32:10.955516  766330 main.go:141] libmachine: (ha-053933-m02)     </console>
	I1007 12:32:10.955528  766330 main.go:141] libmachine: (ha-053933-m02)     <rng model='virtio'>
	I1007 12:32:10.955541  766330 main.go:141] libmachine: (ha-053933-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:32:10.955552  766330 main.go:141] libmachine: (ha-053933-m02)     </rng>
	I1007 12:32:10.955562  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955574  766330 main.go:141] libmachine: (ha-053933-m02)     
	I1007 12:32:10.955588  766330 main.go:141] libmachine: (ha-053933-m02)   </devices>
	I1007 12:32:10.955599  766330 main.go:141] libmachine: (ha-053933-m02) </domain>
	I1007 12:32:10.955606  766330 main.go:141] libmachine: (ha-053933-m02) 
	I1007 12:32:10.964084  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:92:85:a0 in network default
	I1007 12:32:10.964913  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring networks are active...
	I1007 12:32:10.964943  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:10.966004  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network default is active
	I1007 12:32:10.966794  766330 main.go:141] libmachine: (ha-053933-m02) Ensuring network mk-ha-053933 is active
	I1007 12:32:10.967567  766330 main.go:141] libmachine: (ha-053933-m02) Getting domain xml...
	I1007 12:32:10.968704  766330 main.go:141] libmachine: (ha-053933-m02) Creating domain...
	I1007 12:32:11.328435  766330 main.go:141] libmachine: (ha-053933-m02) Waiting to get IP...
	I1007 12:32:11.329255  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.329657  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.329684  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.329635  766716 retry.go:31] will retry after 304.626046ms: waiting for machine to come up
	I1007 12:32:11.636452  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.636889  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.636919  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.636838  766716 retry.go:31] will retry after 276.587443ms: waiting for machine to come up
	I1007 12:32:11.915507  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:11.915953  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:11.915981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:11.915913  766716 retry.go:31] will retry after 337.132979ms: waiting for machine to come up
	I1007 12:32:12.254562  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.255006  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.255031  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.254957  766716 retry.go:31] will retry after 414.173139ms: waiting for machine to come up
	I1007 12:32:12.670554  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:12.670981  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:12.671027  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:12.670964  766716 retry.go:31] will retry after 736.75735ms: waiting for machine to come up
	I1007 12:32:13.409001  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:13.409465  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:13.409492  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:13.409419  766716 retry.go:31] will retry after 877.012423ms: waiting for machine to come up
	I1007 12:32:14.288329  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:14.288723  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:14.288753  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:14.288684  766716 retry.go:31] will retry after 1.037556164s: waiting for machine to come up
	I1007 12:32:15.327401  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:15.327809  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:15.327836  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:15.327768  766716 retry.go:31] will retry after 1.075590546s: waiting for machine to come up
	I1007 12:32:16.404635  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:16.405141  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:16.405170  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:16.405088  766716 retry.go:31] will retry after 1.694642723s: waiting for machine to come up
	I1007 12:32:18.101812  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:18.102290  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:18.102307  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:18.102257  766716 retry.go:31] will retry after 2.246296895s: waiting for machine to come up
	I1007 12:32:20.351742  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:20.352251  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:20.352273  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:20.352201  766716 retry.go:31] will retry after 2.298110151s: waiting for machine to come up
	I1007 12:32:22.653604  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:22.654280  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:22.654305  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:22.654158  766716 retry.go:31] will retry after 3.347094149s: waiting for machine to come up
	I1007 12:32:26.003104  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:26.003592  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:26.003618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:26.003545  766716 retry.go:31] will retry after 3.946300567s: waiting for machine to come up
	I1007 12:32:29.951184  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:29.951661  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find current IP address of domain ha-053933-m02 in network mk-ha-053933
	I1007 12:32:29.951683  766330 main.go:141] libmachine: (ha-053933-m02) DBG | I1007 12:32:29.951615  766716 retry.go:31] will retry after 4.942604939s: waiting for machine to come up
	I1007 12:32:34.900038  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.900804  766330 main.go:141] libmachine: (ha-053933-m02) Found IP for machine: 192.168.39.227
	I1007 12:32:34.900839  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has current primary IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.900847  766330 main.go:141] libmachine: (ha-053933-m02) Reserving static IP address...
	I1007 12:32:34.901345  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "ha-053933-m02", mac: "52:54:00:e8:71:ec", ip: "192.168.39.227"} in network mk-ha-053933
	I1007 12:32:34.989559  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:34.989593  766330 main.go:141] libmachine: (ha-053933-m02) Reserved static IP address: 192.168.39.227
	I1007 12:32:34.989607  766330 main.go:141] libmachine: (ha-053933-m02) Waiting for SSH to be available...
	I1007 12:32:34.993000  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:34.993348  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933
	I1007 12:32:34.993372  766330 main.go:141] libmachine: (ha-053933-m02) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:e8:71:ec
	I1007 12:32:34.993535  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:34.993565  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:34.993595  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:34.993608  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:34.993625  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:34.997438  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:32:34.997462  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:32:34.997471  766330 main.go:141] libmachine: (ha-053933-m02) DBG | command : exit 0
	I1007 12:32:34.997493  766330 main.go:141] libmachine: (ha-053933-m02) DBG | err     : exit status 255
	I1007 12:32:34.997502  766330 main.go:141] libmachine: (ha-053933-m02) DBG | output  : 
	I1007 12:32:38.000138  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Getting to WaitForSSH function...
	I1007 12:32:38.003563  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.003934  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.003965  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.004068  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH client type: external
	I1007 12:32:38.004097  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa (-rw-------)
	I1007 12:32:38.004133  766330 main.go:141] libmachine: (ha-053933-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:32:38.004156  766330 main.go:141] libmachine: (ha-053933-m02) DBG | About to run SSH command:
	I1007 12:32:38.004198  766330 main.go:141] libmachine: (ha-053933-m02) DBG | exit 0
	I1007 12:32:38.134356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:32:38.134575  766330 main.go:141] libmachine: (ha-053933-m02) KVM machine creation complete!
	I1007 12:32:38.134919  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:38.135497  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135718  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:38.135838  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:32:38.135854  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetState
	I1007 12:32:38.137125  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:32:38.137139  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:32:38.137144  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:32:38.137149  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.139531  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140008  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.140029  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.140173  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.140353  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140459  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.140609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.140739  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.140945  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.140955  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:32:38.245844  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.245874  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:32:38.245883  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.249067  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249461  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.249482  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.249773  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.249996  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250184  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.250363  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.250493  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.250691  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.250704  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:32:38.363524  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:32:38.363625  766330 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:32:38.363640  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:32:38.363656  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364053  766330 buildroot.go:166] provisioning hostname "ha-053933-m02"
	I1007 12:32:38.364084  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.364321  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.367546  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368073  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.368107  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.368323  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.368535  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368704  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.368874  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.369073  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.369311  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.369326  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m02 && echo "ha-053933-m02" | sudo tee /etc/hostname
	I1007 12:32:38.493958  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m02
	
	I1007 12:32:38.493990  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.496774  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497161  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.497193  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.497352  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.497571  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497746  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.497916  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.498140  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.498312  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.498329  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:32:38.616208  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:32:38.616246  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:32:38.616266  766330 buildroot.go:174] setting up certificates
	I1007 12:32:38.616276  766330 provision.go:84] configureAuth start
	I1007 12:32:38.616286  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetMachineName
	I1007 12:32:38.616609  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:38.619075  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619398  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.619427  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.619572  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.621757  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622105  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.622129  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.622285  766330 provision.go:143] copyHostCerts
	I1007 12:32:38.622318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622352  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:32:38.622361  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:32:38.622432  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:32:38.622511  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622529  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:32:38.622535  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:32:38.622558  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:32:38.622599  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622622  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:32:38.622630  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:32:38.622663  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:32:38.622733  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m02 san=[127.0.0.1 192.168.39.227 ha-053933-m02 localhost minikube]
	I1007 12:32:38.708452  766330 provision.go:177] copyRemoteCerts
	I1007 12:32:38.708528  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:32:38.708564  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.710962  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711285  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.711318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.711472  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.711655  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.711820  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.711918  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:38.799093  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:32:38.799174  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:32:38.827105  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:32:38.827188  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:32:38.854871  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:32:38.854953  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:32:38.882148  766330 provision.go:87] duration metric: took 265.856123ms to configureAuth
	I1007 12:32:38.882180  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:32:38.882387  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:38.882485  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:38.885151  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885511  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:38.885545  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:38.885761  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:38.885978  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886151  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:38.886344  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:38.886506  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:38.886695  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:38.886715  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:32:39.128135  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:32:39.128167  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:32:39.128176  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetURL
	I1007 12:32:39.129618  766330 main.go:141] libmachine: (ha-053933-m02) DBG | Using libvirt version 6000000
	I1007 12:32:39.132019  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132387  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.132415  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.132625  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:32:39.132640  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:32:39.132647  766330 client.go:171] duration metric: took 28.73104158s to LocalClient.Create
	I1007 12:32:39.132672  766330 start.go:167] duration metric: took 28.731111532s to libmachine.API.Create "ha-053933"
	I1007 12:32:39.132682  766330 start.go:293] postStartSetup for "ha-053933-m02" (driver="kvm2")
	I1007 12:32:39.132692  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:32:39.132710  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.132980  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:32:39.133017  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.135744  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136124  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.136167  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.136341  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.136530  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.136675  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.136835  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.221605  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:32:39.226484  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:32:39.226514  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:32:39.226584  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:32:39.226655  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:32:39.226665  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:32:39.226746  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:32:39.237427  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:39.261998  766330 start.go:296] duration metric: took 129.301228ms for postStartSetup
	I1007 12:32:39.262093  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetConfigRaw
	I1007 12:32:39.262719  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.265384  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.265792  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.265819  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.266155  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:32:39.266397  766330 start.go:128] duration metric: took 28.884542194s to createHost
	I1007 12:32:39.266428  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.268718  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.268995  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.269035  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.269138  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.269298  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269463  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.269575  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.269703  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:32:39.269878  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1007 12:32:39.269888  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:32:39.379504  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304359.360836408
	
	I1007 12:32:39.379530  766330 fix.go:216] guest clock: 1728304359.360836408
	I1007 12:32:39.379539  766330 fix.go:229] Guest: 2024-10-07 12:32:39.360836408 +0000 UTC Remote: 2024-10-07 12:32:39.26641087 +0000 UTC m=+81.160941412 (delta=94.425538ms)
	I1007 12:32:39.379557  766330 fix.go:200] guest clock delta is within tolerance: 94.425538ms
	I1007 12:32:39.379562  766330 start.go:83] releasing machines lock for "ha-053933-m02", held for 28.997822917s
	I1007 12:32:39.379579  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.379889  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:39.383410  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.383763  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.383796  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.386874  766330 out.go:177] * Found network options:
	I1007 12:32:39.388989  766330 out.go:177]   - NO_PROXY=192.168.39.152
	W1007 12:32:39.390421  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.390479  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391270  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391484  766330 main.go:141] libmachine: (ha-053933-m02) Calling .DriverName
	I1007 12:32:39.391605  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:32:39.391666  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	W1007 12:32:39.391801  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:32:39.391871  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:32:39.391887  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHHostname
	I1007 12:32:39.394867  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.394901  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395284  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395318  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:39.395339  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395356  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:39.395674  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395681  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHPort
	I1007 12:32:39.395918  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.395928  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHKeyPath
	I1007 12:32:39.396088  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396100  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetSSHUsername
	I1007 12:32:39.396238  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.396245  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m02/id_rsa Username:docker}
	I1007 12:32:39.642441  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:32:39.649674  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:32:39.649767  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:32:39.666653  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:32:39.666687  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:32:39.666767  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:32:39.684589  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:32:39.700168  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:32:39.700231  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:32:39.716005  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:32:39.731764  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:32:39.862714  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:32:40.011007  766330 docker.go:233] disabling docker service ...
	I1007 12:32:40.011096  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:32:40.027322  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:32:40.041607  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:32:40.187585  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:32:40.331438  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:32:40.347382  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:32:40.367495  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:32:40.367556  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.379748  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:32:40.379840  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.391760  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.403745  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.415505  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:32:40.428366  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.441667  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.460916  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:32:40.473748  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:32:40.485573  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:32:40.485645  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:32:40.500703  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:32:40.512028  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:40.646960  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:32:40.739246  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:32:40.739338  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:32:40.744292  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:32:40.744359  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:32:40.748439  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:32:40.790232  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:32:40.790320  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.827829  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:32:40.860461  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:32:40.862462  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:32:40.864274  766330 main.go:141] libmachine: (ha-053933-m02) Calling .GetIP
	I1007 12:32:40.867846  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868296  766330 main.go:141] libmachine: (ha-053933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:71:ec", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:32:24 +0000 UTC Type:0 Mac:52:54:00:e8:71:ec Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-053933-m02 Clientid:01:52:54:00:e8:71:ec}
	I1007 12:32:40.868323  766330 main.go:141] libmachine: (ha-053933-m02) DBG | domain ha-053933-m02 has defined IP address 192.168.39.227 and MAC address 52:54:00:e8:71:ec in network mk-ha-053933
	I1007 12:32:40.868742  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:32:40.873673  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:40.887367  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:32:40.887606  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:32:40.887888  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.887931  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.903464  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I1007 12:32:40.903898  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.904410  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.904433  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.904903  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.905134  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:32:40.906904  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:40.907228  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:40.907278  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:40.922960  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I1007 12:32:40.923502  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:40.924055  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:40.924078  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:40.924407  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:40.924586  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:40.924737  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.227
	I1007 12:32:40.924756  766330 certs.go:194] generating shared ca certs ...
	I1007 12:32:40.924778  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:40.924946  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:32:40.925010  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:32:40.925020  766330 certs.go:256] generating profile certs ...
	I1007 12:32:40.925169  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:32:40.925208  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90
	I1007 12:32:40.925226  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.254]
	I1007 12:32:41.148971  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 ...
	I1007 12:32:41.149006  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90: {Name:mkfc72ac98e5f64b1efa978f83502cc26e6b00b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149188  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 ...
	I1007 12:32:41.149202  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90: {Name:mkb6d827b308c96cc8f5173b1a5723adff201a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:32:41.149277  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:32:41.149418  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.54b8ff90 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:32:41.149564  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:32:41.149589  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:32:41.149603  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:32:41.149618  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:32:41.149632  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:32:41.149645  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:32:41.149658  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:32:41.149670  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:32:41.149681  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:32:41.149730  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:32:41.149764  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:32:41.149774  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:32:41.149801  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:32:41.149822  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:32:41.149848  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:32:41.149885  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:32:41.149911  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.149925  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.149937  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.149971  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:41.153293  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153635  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:41.153659  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:41.153887  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:41.154192  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:41.154376  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:41.154520  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:41.226577  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:32:41.232730  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:32:41.245060  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:32:41.251197  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:32:41.264593  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:32:41.269517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:32:41.281560  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:32:41.286754  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:32:41.299707  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:32:41.304594  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:32:41.317916  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:32:41.323393  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:32:41.336013  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:32:41.366179  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:32:41.393458  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:32:41.419874  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:32:41.447814  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:32:41.474678  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:32:41.500522  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:32:41.527411  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:32:41.552513  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:32:41.576732  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:32:41.602701  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:32:41.628143  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:32:41.644998  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:32:41.662248  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:32:41.679785  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:32:41.698239  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:32:41.717010  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:32:41.735412  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:32:41.753557  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:32:41.759787  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:32:41.771601  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776332  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.776414  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:32:41.782579  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:32:41.793992  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:32:41.806293  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811220  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.811296  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:32:41.817656  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:32:41.829292  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:32:41.840880  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845905  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.845988  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:32:41.852343  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:32:41.864190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:32:41.868675  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:32:41.868747  766330 kubeadm.go:934] updating node {m02 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1007 12:32:41.868844  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:32:41.868868  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:32:41.868905  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:32:41.889715  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:32:41.889813  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:32:41.889876  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.901277  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:32:41.901344  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:32:41.911928  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:32:41.911964  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912020  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:32:41.912066  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:32:41.912079  766330 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:32:41.917061  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:32:41.917099  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:32:42.483195  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.483287  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:32:42.490132  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:32:42.490184  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:32:42.569436  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:32:42.620637  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.620740  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:32:42.635485  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:32:42.635527  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 12:32:43.157634  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:32:43.168142  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:32:43.185353  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:32:43.203562  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:32:43.222930  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:32:43.227330  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:32:43.240979  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:32:43.377709  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:32:43.396837  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:32:43.397301  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:32:43.397366  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:32:43.414130  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1007 12:32:43.414696  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:32:43.415312  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:32:43.415338  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:32:43.415686  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:32:43.415901  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:32:43.416102  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:32:43.416222  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:32:43.416248  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:32:43.419194  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419695  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:32:43.419728  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:32:43.419860  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:32:43.420045  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:32:43.420225  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:32:43.420387  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:32:43.569631  766330 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:32:43.569697  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I1007 12:33:05.382098  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zrjle4.kmlkks5psv59wr5u --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m02 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (21.812371374s)
	I1007 12:33:05.382136  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:33:05.983459  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m02 minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:33:06.136889  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:33:06.286153  766330 start.go:319] duration metric: took 22.870046293s to joinCluster
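The join above follows a two-step pattern: ask the existing control plane for a join command (kubeadm token create --print-join-command --ttl=0), then run that command on the new machine with the extra control-plane flags seen in the log. A rough Go sketch of how such a command line can be assembled, assuming kubeadm is on the local PATH (minikube actually runs both steps over SSH on the respective VMs; this sketch only prints the command instead of executing it):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (on the primary control plane): print a reusable join command.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))

        // Step 2 (on the joining machine): extend it with the flags used in this run.
        extra := []string{
            "--ignore-preflight-errors=all",
            "--cri-socket", "unix:///var/run/crio/crio.sock",
            "--node-name=ha-053933-m02",
            "--control-plane",
            "--apiserver-advertise-address=192.168.39.227",
            "--apiserver-bind-port=8443",
        }
        fmt.Println("would run on m02:", joinCmd+" "+strings.Join(extra, " "))
    }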
	I1007 12:33:06.286246  766330 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:06.286558  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:06.288312  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:33:06.290220  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:06.583421  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:06.686534  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:33:06.686755  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:33:06.686819  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:33:06.687163  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m02" to be "Ready" ...
	I1007 12:33:06.687340  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:06.687357  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:06.687368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:06.687373  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:06.711245  766330 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1007 12:33:07.188212  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.188242  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.188255  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.188274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.191359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:07.688452  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:07.688484  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:07.688497  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:07.688502  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:07.808189  766330 round_trippers.go:574] Response Status: 200 OK in 119 milliseconds
	I1007 12:33:08.187451  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.187480  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.187491  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.187496  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.191935  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:08.687677  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:08.687701  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:08.687711  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:08.687719  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:08.690915  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:08.691670  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:09.188237  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.188270  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.188281  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.188289  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.194158  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:09.687515  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:09.687547  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:09.687557  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:09.687562  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:09.690808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.188360  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.188385  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.188394  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.188400  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.191880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:10.688056  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:10.688084  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:10.688096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:10.688104  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:10.691003  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:11.188165  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.188195  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.188206  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.188211  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.191751  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:11.192284  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:11.687697  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:11.687733  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:11.687744  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:11.687751  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:11.692471  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:12.187925  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.187959  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.187971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.187977  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.191580  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:12.687588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:12.687620  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:12.687631  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:12.687637  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:12.691690  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:13.187912  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.187949  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.187959  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.187964  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.191046  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.688329  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:13.688359  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:13.688370  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:13.688374  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:13.692160  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:13.692713  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:14.188174  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.188198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.188207  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.188210  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.197312  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:33:14.688323  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:14.688353  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:14.688364  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:14.688369  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:14.692255  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.188273  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.188299  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.188309  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.188312  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.191633  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:15.688194  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:15.688221  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:15.688229  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:15.688233  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:15.691201  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:16.188087  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.188118  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.188130  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.188136  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.191654  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:16.192613  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:16.688084  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:16.688116  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:16.688127  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:16.688131  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:16.691196  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.188046  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.188079  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.188091  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.188099  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.191563  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:17.687488  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:17.687515  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:17.687523  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:17.687527  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:17.692225  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:18.187466  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.187496  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.187508  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.187513  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.190916  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.688169  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:18.688198  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:18.688209  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:18.688214  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:18.691684  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:18.692180  766330 node_ready.go:53] node "ha-053933-m02" has status "Ready":"False"
	I1007 12:33:19.188410  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.188443  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.188455  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.188461  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.191778  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:19.687861  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:19.687898  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:19.687909  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:19.687918  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:19.692517  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.187370  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.187394  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.187404  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.187409  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.190680  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.688383  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.688409  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.688418  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.688422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.692411  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.692972  766330 node_ready.go:49] node "ha-053933-m02" has status "Ready":"True"
	I1007 12:33:20.692999  766330 node_ready.go:38] duration metric: took 14.005807631s for node "ha-053933-m02" to be "Ready" ...
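The block of repeated GET /api/v1/nodes/ha-053933-m02 requests above is a readiness poll: fetch the Node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of the same wait, assuming the kubeconfig path from this run (the helper name and timeout handling are illustrative, not minikube's internals):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node until its Ready condition is True, the same
    // check node_ready.go performs via the repeated GETs above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18424-747025/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "ha-053933-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-053933-m02" is Ready`)
    }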
	I1007 12:33:20.693012  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:33:20.693143  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:20.693154  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.693162  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.693165  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.697361  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:20.703660  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.703786  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:33:20.703796  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.703803  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.703807  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.707181  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.708043  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.708061  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.708069  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.708074  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.710812  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.711426  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.711448  766330 pod_ready.go:82] duration metric: took 7.751816ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711460  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.711526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:33:20.711534  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.711542  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.711545  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.714909  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.715901  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.715918  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.715927  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.715934  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.719555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.720647  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.720668  766330 pod_ready.go:82] duration metric: took 9.201382ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720679  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.720751  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:33:20.720759  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.720768  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.720773  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.723495  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.724196  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:20.724215  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.724226  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.724229  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.726952  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:20.727595  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:20.727616  766330 pod_ready.go:82] duration metric: took 6.930211ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727627  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:20.727692  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:20.727700  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.727714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.727718  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.731049  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:20.731750  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:20.731766  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:20.731786  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:20.731793  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:20.734880  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.228231  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.228260  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.228274  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.228281  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.231667  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.232387  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.232407  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.232416  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.232422  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.235588  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.728588  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:21.728616  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.728628  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.728635  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.732106  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:21.732770  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:21.732786  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:21.732795  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:21.732798  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:21.735773  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.228683  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:33:22.228711  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.228720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.228724  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232193  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.232808  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.232825  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.232834  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.232839  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.235792  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:22.236315  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.236338  766330 pod_ready.go:82] duration metric: took 1.508704734s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236354  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.236419  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:33:22.236427  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.236434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.236438  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.239818  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.288880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:22.288905  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.288915  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.288920  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.292489  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.293074  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.293096  766330 pod_ready.go:82] duration metric: took 56.735786ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.293107  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.488539  766330 request.go:632] Waited for 195.305457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488616  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:33:22.488627  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.488640  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.488646  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.492086  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.688457  766330 request.go:632] Waited for 195.312015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688532  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:22.688537  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.688546  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.688550  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.691998  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:22.692647  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:22.692670  766330 pod_ready.go:82] duration metric: took 399.55659ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.692683  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:22.888729  766330 request.go:632] Waited for 195.939419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888840  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:33:22.888849  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:22.888862  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:22.888872  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:22.892505  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.088565  766330 request.go:632] Waited for 195.365241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088643  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.088651  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.088662  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.088670  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.091637  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.092259  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.092277  766330 pod_ready.go:82] duration metric: took 399.588182ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.092289  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.289099  766330 request.go:632] Waited for 196.721146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289204  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:33:23.289216  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.289227  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.289236  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.292352  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.488835  766330 request.go:632] Waited for 195.58765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488907  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:23.488912  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.488920  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.488925  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.491857  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:33:23.492343  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.492364  766330 pod_ready.go:82] duration metric: took 400.067435ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.492375  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.688407  766330 request.go:632] Waited for 195.943093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688521  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:33:23.688529  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.688538  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.688543  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.692233  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:23.888501  766330 request.go:632] Waited for 195.323816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888614  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:23.888622  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:23.888633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:23.888639  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:23.892680  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:23.893104  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:23.893123  766330 pod_ready.go:82] duration metric: took 400.740542ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:23.893133  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.089301  766330 request.go:632] Waited for 196.068782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089368  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:33:24.089374  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.089388  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.089395  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.092648  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.288647  766330 request.go:632] Waited for 195.319776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288759  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:24.288778  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.288794  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.288805  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.292348  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.292959  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.292988  766330 pod_ready.go:82] duration metric: took 399.844819ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.293007  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.489072  766330 request.go:632] Waited for 195.96428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489149  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:33:24.489157  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.489167  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.489175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.492662  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.688896  766330 request.go:632] Waited for 195.439422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689009  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:33:24.689017  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.689029  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.689035  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.692350  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:24.692962  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:24.692988  766330 pod_ready.go:82] duration metric: took 399.970822ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.693003  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:24.889214  766330 request.go:632] Waited for 196.093786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889300  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:33:24.889309  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:24.889322  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:24.889329  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:24.892619  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.088740  766330 request.go:632] Waited for 195.405391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088815  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:33:25.088821  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.088831  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.088837  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.092543  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:33:25.093141  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:33:25.093166  766330 pod_ready.go:82] duration metric: took 400.155132ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:33:25.093183  766330 pod_ready.go:39] duration metric: took 4.400126454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
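The pod_ready waits above perform the analogous per-pod check: for each system-critical pod matched by the component/k8s-app labels listed in the log, fetch the Pod and confirm its PodReady condition is True. A short sketch of that check, assuming a default ~/.kube/config location; the label selector is shown for one component only as an example:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "component=etcd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
        }
    }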
	I1007 12:33:25.093213  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:33:25.093283  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:33:25.111694  766330 api_server.go:72] duration metric: took 18.825401123s to wait for apiserver process to appear ...
	I1007 12:33:25.111735  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:33:25.111762  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:33:25.118517  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:33:25.118624  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:33:25.118639  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.118651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.118656  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.119598  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:33:25.119715  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:33:25.119734  766330 api_server.go:131] duration metric: took 7.991573ms to wait for apiserver health ...
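The health and version probes above are two plain GETs against the API server: /healthz should return the body "ok", and /version reports the control-plane version (v1.31.1 in this run). A compact client-go sketch of both probes, assuming a default kubeconfig location:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz: expect the literal body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println("healthz:", string(body))

        // GET /version: the control-plane version, e.g. v1.31.1 here.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("version:", v.GitVersion)
    }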
	I1007 12:33:25.119743  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:33:25.289166  766330 request.go:632] Waited for 169.340781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289250  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.289255  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.289263  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.289268  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.295241  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.299874  766330 system_pods.go:59] 17 kube-system pods found
	I1007 12:33:25.299914  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.299919  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.299923  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.299926  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.299929  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.299933  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.299938  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.299941  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.299944  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.299947  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.299950  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.299953  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.299956  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.299959  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.299962  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.300005  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.300042  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.300050  766330 system_pods.go:74] duration metric: took 180.300279ms to wait for pod list to return data ...
	I1007 12:33:25.300061  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:33:25.489349  766330 request.go:632] Waited for 189.154197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489422  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:33:25.489429  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.489441  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.489451  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.493783  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.494042  766330 default_sa.go:45] found service account: "default"
	I1007 12:33:25.494060  766330 default_sa.go:55] duration metric: took 193.9912ms for default service account to be created ...
	I1007 12:33:25.494070  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:33:25.688474  766330 request.go:632] Waited for 194.303496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688554  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:33:25.688560  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.688568  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.688572  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.694194  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:33:25.700121  766330 system_pods.go:86] 17 kube-system pods found
	I1007 12:33:25.700159  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:33:25.700167  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:33:25.700179  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:33:25.700185  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:33:25.700191  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:33:25.700196  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:33:25.700202  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:33:25.700207  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:33:25.700213  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:33:25.700218  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:33:25.700223  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:33:25.700228  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:33:25.700233  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:33:25.700242  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:33:25.700248  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:33:25.700255  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:33:25.700258  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:33:25.700266  766330 system_pods.go:126] duration metric: took 206.189927ms to wait for k8s-apps to be running ...
	I1007 12:33:25.700277  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:33:25.700338  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:25.716873  766330 system_svc.go:56] duration metric: took 16.577644ms WaitForService to wait for kubelet
	I1007 12:33:25.716918  766330 kubeadm.go:582] duration metric: took 19.430632885s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:33:25.716946  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:33:25.889445  766330 request.go:632] Waited for 172.381554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889527  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:33:25.889535  766330 round_trippers.go:469] Request Headers:
	I1007 12:33:25.889543  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:33:25.889547  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:33:25.893637  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:33:25.894406  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894446  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894466  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:33:25.894476  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:33:25.894483  766330 node_conditions.go:105] duration metric: took 177.530833ms to run NodePressure ...
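The NodePressure step lists the cluster's nodes and reads each node's capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). A minimal client-go sketch that reads the same fields, assuming a kubeconfig at a placeholder path rather than the test profile's own:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the harness uses its own profile directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }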
	I1007 12:33:25.894499  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:33:25.894527  766330 start.go:255] writing updated cluster config ...
	I1007 12:33:25.896984  766330 out.go:201] 
	I1007 12:33:25.898622  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:25.898739  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.900470  766330 out.go:177] * Starting "ha-053933-m03" control-plane node in "ha-053933" cluster
	I1007 12:33:25.901744  766330 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:33:25.901777  766330 cache.go:56] Caching tarball of preloaded images
	I1007 12:33:25.901887  766330 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:33:25.901898  766330 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:33:25.901996  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:25.902210  766330 start.go:360] acquireMachinesLock for ha-053933-m03: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:33:25.902261  766330 start.go:364] duration metric: took 29.008µs to acquireMachinesLock for "ha-053933-m03"
	I1007 12:33:25.902279  766330 start.go:93] Provisioning new machine with config: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:25.902373  766330 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:33:25.903871  766330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:33:25.903977  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:25.904021  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:25.919504  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I1007 12:33:25.920002  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:25.920499  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:25.920525  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:25.920897  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:25.921112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:25.921261  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:25.921411  766330 start.go:159] libmachine.API.Create for "ha-053933" (driver="kvm2")
	I1007 12:33:25.921445  766330 client.go:168] LocalClient.Create starting
	I1007 12:33:25.921486  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 12:33:25.921530  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921554  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921635  766330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 12:33:25.921664  766330 main.go:141] libmachine: Decoding PEM data...
	I1007 12:33:25.921680  766330 main.go:141] libmachine: Parsing certificate...
	I1007 12:33:25.921706  766330 main.go:141] libmachine: Running pre-create checks...
	I1007 12:33:25.921718  766330 main.go:141] libmachine: (ha-053933-m03) Calling .PreCreateCheck
	I1007 12:33:25.921884  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:25.922300  766330 main.go:141] libmachine: Creating machine...
	I1007 12:33:25.922316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .Create
	I1007 12:33:25.922510  766330 main.go:141] libmachine: (ha-053933-m03) Creating KVM machine...
	I1007 12:33:25.923845  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing default KVM network
	I1007 12:33:25.924001  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found existing private KVM network mk-ha-053933
	I1007 12:33:25.924170  766330 main.go:141] libmachine: (ha-053933-m03) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:25.924210  766330 main.go:141] libmachine: (ha-053933-m03) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:33:25.924298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:25.924182  767113 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:25.924373  766330 main.go:141] libmachine: (ha-053933-m03) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:33:26.206977  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.206809  767113 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa...
	I1007 12:33:26.524415  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524231  767113 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk...
	I1007 12:33:26.524455  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing magic tar header
	I1007 12:33:26.524470  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Writing SSH key tar header
	I1007 12:33:26.524482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.524376  767113 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 ...
	I1007 12:33:26.524496  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03
	I1007 12:33:26.524534  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03 (perms=drwx------)
	I1007 12:33:26.524574  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 12:33:26.524585  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:33:26.524600  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 12:33:26.524609  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 12:33:26.524638  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:33:26.524653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 12:33:26.524661  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:33:26.524670  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:33:26.524678  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Checking permissions on dir: /home
	I1007 12:33:26.524693  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Skipping /home - not owner
	I1007 12:33:26.524703  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:33:26.524718  766330 main.go:141] libmachine: (ha-053933-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:33:26.524726  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.525722  766330 main.go:141] libmachine: (ha-053933-m03) define libvirt domain using xml: 
	I1007 12:33:26.525747  766330 main.go:141] libmachine: (ha-053933-m03) <domain type='kvm'>
	I1007 12:33:26.525776  766330 main.go:141] libmachine: (ha-053933-m03)   <name>ha-053933-m03</name>
	I1007 12:33:26.525795  766330 main.go:141] libmachine: (ha-053933-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:33:26.525808  766330 main.go:141] libmachine: (ha-053933-m03)   <vcpu>2</vcpu>
	I1007 12:33:26.525818  766330 main.go:141] libmachine: (ha-053933-m03)   <features>
	I1007 12:33:26.525830  766330 main.go:141] libmachine: (ha-053933-m03)     <acpi/>
	I1007 12:33:26.525838  766330 main.go:141] libmachine: (ha-053933-m03)     <apic/>
	I1007 12:33:26.525850  766330 main.go:141] libmachine: (ha-053933-m03)     <pae/>
	I1007 12:33:26.525859  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.525905  766330 main.go:141] libmachine: (ha-053933-m03)   </features>
	I1007 12:33:26.525934  766330 main.go:141] libmachine: (ha-053933-m03)   <cpu mode='host-passthrough'>
	I1007 12:33:26.525945  766330 main.go:141] libmachine: (ha-053933-m03)   
	I1007 12:33:26.525955  766330 main.go:141] libmachine: (ha-053933-m03)   </cpu>
	I1007 12:33:26.525965  766330 main.go:141] libmachine: (ha-053933-m03)   <os>
	I1007 12:33:26.525971  766330 main.go:141] libmachine: (ha-053933-m03)     <type>hvm</type>
	I1007 12:33:26.525976  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='cdrom'/>
	I1007 12:33:26.525983  766330 main.go:141] libmachine: (ha-053933-m03)     <boot dev='hd'/>
	I1007 12:33:26.525988  766330 main.go:141] libmachine: (ha-053933-m03)     <bootmenu enable='no'/>
	I1007 12:33:26.525995  766330 main.go:141] libmachine: (ha-053933-m03)   </os>
	I1007 12:33:26.526002  766330 main.go:141] libmachine: (ha-053933-m03)   <devices>
	I1007 12:33:26.526013  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='cdrom'>
	I1007 12:33:26.526054  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/boot2docker.iso'/>
	I1007 12:33:26.526067  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:33:26.526077  766330 main.go:141] libmachine: (ha-053933-m03)       <readonly/>
	I1007 12:33:26.526087  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526099  766330 main.go:141] libmachine: (ha-053933-m03)     <disk type='file' device='disk'>
	I1007 12:33:26.526109  766330 main.go:141] libmachine: (ha-053933-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:33:26.526124  766330 main.go:141] libmachine: (ha-053933-m03)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/ha-053933-m03.rawdisk'/>
	I1007 12:33:26.526142  766330 main.go:141] libmachine: (ha-053933-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:33:26.526153  766330 main.go:141] libmachine: (ha-053933-m03)     </disk>
	I1007 12:33:26.526162  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526172  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='mk-ha-053933'/>
	I1007 12:33:26.526180  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526189  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526201  766330 main.go:141] libmachine: (ha-053933-m03)     <interface type='network'>
	I1007 12:33:26.526212  766330 main.go:141] libmachine: (ha-053933-m03)       <source network='default'/>
	I1007 12:33:26.526219  766330 main.go:141] libmachine: (ha-053933-m03)       <model type='virtio'/>
	I1007 12:33:26.526233  766330 main.go:141] libmachine: (ha-053933-m03)     </interface>
	I1007 12:33:26.526252  766330 main.go:141] libmachine: (ha-053933-m03)     <serial type='pty'>
	I1007 12:33:26.526271  766330 main.go:141] libmachine: (ha-053933-m03)       <target port='0'/>
	I1007 12:33:26.526293  766330 main.go:141] libmachine: (ha-053933-m03)     </serial>
	I1007 12:33:26.526317  766330 main.go:141] libmachine: (ha-053933-m03)     <console type='pty'>
	I1007 12:33:26.526331  766330 main.go:141] libmachine: (ha-053933-m03)       <target type='serial' port='0'/>
	I1007 12:33:26.526341  766330 main.go:141] libmachine: (ha-053933-m03)     </console>
	I1007 12:33:26.526352  766330 main.go:141] libmachine: (ha-053933-m03)     <rng model='virtio'>
	I1007 12:33:26.526364  766330 main.go:141] libmachine: (ha-053933-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:33:26.526375  766330 main.go:141] libmachine: (ha-053933-m03)     </rng>
	I1007 12:33:26.526382  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526387  766330 main.go:141] libmachine: (ha-053933-m03)     
	I1007 12:33:26.526400  766330 main.go:141] libmachine: (ha-053933-m03)   </devices>
	I1007 12:33:26.526412  766330 main.go:141] libmachine: (ha-053933-m03) </domain>
	I1007 12:33:26.526422  766330 main.go:141] libmachine: (ha-053933-m03) 
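The XML printed above is handed to libvirt to define and then start the new domain. A minimal sketch of those two calls with the libvirt-go bindings (libvirt.org/go/libvirt), assuming the XML has been assembled into a file; error handling is kept to panics for brevity:

    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        xml, err := os.ReadFile("ha-053933-m03.xml") // placeholder file holding the domain XML above
        if err != nil {
            panic(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("Creating domain...")
            panic(err)
        }
    }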
	I1007 12:33:26.533781  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:c6:4c:5a in network default
	I1007 12:33:26.534377  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring networks are active...
	I1007 12:33:26.534401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.535036  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network default is active
	I1007 12:33:26.535318  766330 main.go:141] libmachine: (ha-053933-m03) Ensuring network mk-ha-053933 is active
	I1007 12:33:26.535654  766330 main.go:141] libmachine: (ha-053933-m03) Getting domain xml...
	I1007 12:33:26.536349  766330 main.go:141] libmachine: (ha-053933-m03) Creating domain...
	I1007 12:33:26.886582  766330 main.go:141] libmachine: (ha-053933-m03) Waiting to get IP...
	I1007 12:33:26.887435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:26.887805  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:26.887834  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:26.887787  767113 retry.go:31] will retry after 278.405187ms: waiting for machine to come up
	I1007 12:33:27.168357  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.168978  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.169005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.168920  767113 retry.go:31] will retry after 329.830323ms: waiting for machine to come up
	I1007 12:33:27.500231  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.500684  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.500728  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.500604  767113 retry.go:31] will retry after 372.653315ms: waiting for machine to come up
	I1007 12:33:27.875190  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:27.875624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:27.875654  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:27.875577  767113 retry.go:31] will retry after 444.943717ms: waiting for machine to come up
	I1007 12:33:28.322485  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.322945  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.322970  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.322909  767113 retry.go:31] will retry after 669.257582ms: waiting for machine to come up
	I1007 12:33:28.994144  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:28.994697  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:28.994715  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:28.994632  767113 retry.go:31] will retry after 733.137025ms: waiting for machine to come up
	I1007 12:33:29.729782  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:29.730264  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:29.730293  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:29.730214  767113 retry.go:31] will retry after 899.738353ms: waiting for machine to come up
	I1007 12:33:30.632328  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:30.632890  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:30.632916  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:30.632809  767113 retry.go:31] will retry after 931.890845ms: waiting for machine to come up
	I1007 12:33:31.566008  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:31.566423  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:31.566453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:31.566382  767113 retry.go:31] will retry after 1.324143868s: waiting for machine to come up
	I1007 12:33:32.892206  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:32.892600  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:32.892624  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:32.892560  767113 retry.go:31] will retry after 1.884957277s: waiting for machine to come up
	I1007 12:33:34.779972  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:34.780414  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:34.780482  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:34.780403  767113 retry.go:31] will retry after 2.797940617s: waiting for machine to come up
	I1007 12:33:37.580503  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:37.580938  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:37.581017  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:37.580916  767113 retry.go:31] will retry after 3.450180083s: waiting for machine to come up
	I1007 12:33:41.032804  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:41.033196  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:41.033227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:41.033144  767113 retry.go:31] will retry after 3.620491508s: waiting for machine to come up
	I1007 12:33:44.657262  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:44.657724  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find current IP address of domain ha-053933-m03 in network mk-ha-053933
	I1007 12:33:44.657749  766330 main.go:141] libmachine: (ha-053933-m03) DBG | I1007 12:33:44.657677  767113 retry.go:31] will retry after 4.652577623s: waiting for machine to come up
	I1007 12:33:49.314220  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314598  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.314619  766330 main.go:141] libmachine: (ha-053933-m03) Found IP for machine: 192.168.39.53
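The retry lines above poll the network's DHCP leases with a growing delay (roughly 280ms at first, several seconds by the end) until the domain's MAC has an address. A stand-alone sketch of that pattern; the lookup callback and timings are assumptions, not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP calls lookup() with an increasing delay until it returns an IP
    // or the deadline passes — the same shape as the retry.go lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            delay += delay / 2 // back off a little more on each attempt
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        ip, err := waitForIP(func() (string, error) {
            return "", errors.New("no lease yet") // replace with a real DHCP lease lookup
        }, 2*time.Second)
        fmt.Println(ip, err)
    }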
	I1007 12:33:49.314644  766330 main.go:141] libmachine: (ha-053933-m03) Reserving static IP address...
	I1007 12:33:49.315014  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "ha-053933-m03", mac: "52:54:00:92:71:bc", ip: "192.168.39.53"} in network mk-ha-053933
	I1007 12:33:49.395618  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:49.395664  766330 main.go:141] libmachine: (ha-053933-m03) Reserved static IP address: 192.168.39.53
	I1007 12:33:49.395679  766330 main.go:141] libmachine: (ha-053933-m03) Waiting for SSH to be available...
	I1007 12:33:49.398571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:49.398960  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933
	I1007 12:33:49.398990  766330 main.go:141] libmachine: (ha-053933-m03) DBG | unable to find defined IP address of network mk-ha-053933 interface with MAC address 52:54:00:92:71:bc
	I1007 12:33:49.399160  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:49.399184  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:49.399214  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:49.399227  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:49.399241  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:49.403005  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 12:33:49.403027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 12:33:49.403035  766330 main.go:141] libmachine: (ha-053933-m03) DBG | command : exit 0
	I1007 12:33:49.403039  766330 main.go:141] libmachine: (ha-053933-m03) DBG | err     : exit status 255
	I1007 12:33:49.403074  766330 main.go:141] libmachine: (ha-053933-m03) DBG | output  : 
	I1007 12:33:52.403247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Getting to WaitForSSH function...
	I1007 12:33:52.406252  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.406668  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.406699  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.407002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH client type: external
	I1007 12:33:52.407027  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa (-rw-------)
	I1007 12:33:52.407053  766330 main.go:141] libmachine: (ha-053933-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:33:52.407069  766330 main.go:141] libmachine: (ha-053933-m03) DBG | About to run SSH command:
	I1007 12:33:52.407109  766330 main.go:141] libmachine: (ha-053933-m03) DBG | exit 0
	I1007 12:33:52.534915  766330 main.go:141] libmachine: (ha-053933-m03) DBG | SSH cmd err, output: <nil>: 
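WaitForSSH shells out to /usr/bin/ssh with the options shown above and runs "exit 0"; exit status 0 means sshd is reachable, while the first attempt fails with status 255 because the interface had no address yet. A cut-down sketch of that probe with os/exec; the key path is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath, "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        // A nil error means exit 0; ssh exits 255 when it cannot connect.
        return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func main() {
        fmt.Println(sshReady("192.168.39.53", "/path/to/id_rsa"))
    }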
	I1007 12:33:52.535288  766330 main.go:141] libmachine: (ha-053933-m03) KVM machine creation complete!
	I1007 12:33:52.535635  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:52.536389  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536639  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:52.536874  766330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:33:52.536891  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetState
	I1007 12:33:52.538444  766330 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:33:52.538462  766330 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:33:52.538469  766330 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:33:52.538476  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.541542  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.541939  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.541963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.542112  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.542296  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542481  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.542677  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.542861  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.543138  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.543151  766330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:33:52.649741  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:52.649782  766330 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:33:52.649794  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.652589  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.652969  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.653002  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.653140  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.653374  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653551  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.653673  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.653873  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.654072  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.654084  766330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:33:52.759715  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:33:52.759834  766330 main.go:141] libmachine: found compatible host: buildroot
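Provisioner detection boils down to reading /etc/os-release (fetched over SSH above) and matching its ID/NAME fields, which here resolve to Buildroot. A small local sketch of that parse, with buildroot as the only case handled:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                info[k] = strings.Trim(v, `"`)
            }
        }
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
        } else {
            fmt.Println("host is", info["PRETTY_NAME"])
        }
    }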
	I1007 12:33:52.759854  766330 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:33:52.759868  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760230  766330 buildroot.go:166] provisioning hostname "ha-053933-m03"
	I1007 12:33:52.760268  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:52.760500  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.763370  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.763827  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.763857  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.764033  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.764271  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764477  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.764633  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.764776  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.764967  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.764978  766330 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933-m03 && echo "ha-053933-m03" | sudo tee /etc/hostname
	I1007 12:33:52.887558  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933-m03
	
	I1007 12:33:52.887587  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:52.890785  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891247  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:52.891281  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:52.891393  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:52.891600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.891855  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:52.892166  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:52.892433  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:52.892634  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:52.892651  766330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:33:53.009149  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:33:53.009337  766330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:33:53.009478  766330 buildroot.go:174] setting up certificates
	I1007 12:33:53.009552  766330 provision.go:84] configureAuth start
	I1007 12:33:53.009602  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetMachineName
	I1007 12:33:53.009986  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.012616  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.012988  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.013047  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.013159  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.015298  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015632  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.015653  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.015824  766330 provision.go:143] copyHostCerts
	I1007 12:33:53.015867  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.015916  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:33:53.015927  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:33:53.016009  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:33:53.016125  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016152  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:33:53.016162  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:33:53.016198  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:33:53.016272  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016302  766330 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:33:53.016310  766330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:33:53.016353  766330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:33:53.016436  766330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933-m03 san=[127.0.0.1 192.168.39.53 ha-053933-m03 localhost minikube]
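The server certificate generated above carries the SANs [127.0.0.1 192.168.39.53 ha-053933-m03 localhost minikube] and is signed by the profile's CA. A hedged crypto/x509 sketch of a certificate with the same SANs; loading the CA pair and the new server key is assumed to have happened elsewhere, and the helper name is illustrative:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server cert with the SANs listed above, using an
    // already-loaded CA certificate and key (assumptions in this sketch).
    func newServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-053933-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-053933-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    }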
	I1007 12:33:53.275511  766330 provision.go:177] copyRemoteCerts
	I1007 12:33:53.275578  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:33:53.275609  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.278571  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.278958  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.278997  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.279237  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.279470  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.279694  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.279856  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.365609  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:33:53.365705  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:33:53.394108  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:33:53.394203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:33:53.421846  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:33:53.421930  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
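copyRemoteCerts streams each PEM to the guest; the scp lines above land ca.pem, server.pem and server-key.pem in /etc/docker. One simple stand-in for that push, using golang.org/x/crypto/ssh and sudo tee, assuming an already-established *ssh.Client:

    package provision

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushFile writes a local file to a root-owned path on the guest by piping
    // it through `sudo tee` — a sketch, not minikube's scp implementation.
    func pushFile(client *ssh.Client, localPath, remotePath string) error {
        data, err := os.ReadFile(localPath)
        if err != nil {
            return err
        }
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }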
	I1007 12:33:53.448310  766330 provision.go:87] duration metric: took 438.733854ms to configureAuth
	I1007 12:33:53.448346  766330 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:33:53.448616  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:53.448711  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.451435  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.451928  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.451963  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.452102  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.452316  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452472  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.452605  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.452784  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.452957  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.452971  766330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:33:53.686714  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:33:53.686753  766330 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:33:53.686762  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetURL
	I1007 12:33:53.688034  766330 main.go:141] libmachine: (ha-053933-m03) DBG | Using libvirt version 6000000
	I1007 12:33:53.690553  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691049  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.691081  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.691275  766330 main.go:141] libmachine: Docker is up and running!
	I1007 12:33:53.691309  766330 main.go:141] libmachine: Reticulating splines...
	I1007 12:33:53.691317  766330 client.go:171] duration metric: took 27.769860907s to LocalClient.Create
	I1007 12:33:53.691347  766330 start.go:167] duration metric: took 27.76993753s to libmachine.API.Create "ha-053933"
	I1007 12:33:53.691356  766330 start.go:293] postStartSetup for "ha-053933-m03" (driver="kvm2")
	I1007 12:33:53.691366  766330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:33:53.691384  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.691657  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:33:53.691683  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.693729  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694161  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.694191  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.694359  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.694535  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.694715  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.694900  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.777573  766330 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:33:53.782595  766330 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:33:53.782625  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:33:53.782710  766330 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:33:53.782804  766330 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:33:53.782816  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:33:53.782918  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:33:53.793716  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:53.819127  766330 start.go:296] duration metric: took 127.75028ms for postStartSetup
	I1007 12:33:53.819228  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetConfigRaw
	I1007 12:33:53.819965  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.822875  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823288  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.823318  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.823585  766330 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:33:53.823804  766330 start.go:128] duration metric: took 27.921419624s to createHost
	I1007 12:33:53.823830  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.826389  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826755  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.826788  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.826991  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.827187  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.827532  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.827708  766330 main.go:141] libmachine: Using SSH client type: native
	I1007 12:33:53.827909  766330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I1007 12:33:53.827922  766330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:33:53.935241  766330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304433.915881343
	
	I1007 12:33:53.935272  766330 fix.go:216] guest clock: 1728304433.915881343
	I1007 12:33:53.935282  766330 fix.go:229] Guest: 2024-10-07 12:33:53.915881343 +0000 UTC Remote: 2024-10-07 12:33:53.823818192 +0000 UTC m=+155.718348733 (delta=92.063151ms)
	I1007 12:33:53.935303  766330 fix.go:200] guest clock delta is within tolerance: 92.063151ms
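The clock check runs `date +%s.%N` on the guest, converts the output to a time, and compares it with the host clock; here the delta is ~92ms and passes. A small sketch of that comparison; the one-second tolerance is an assumption in this sketch, not a value read from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        out := "1728304433.915881343" // sample `date +%s.%N` output, taken from the log above
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Println("guest clock delta:", delta)
        if delta > time.Second { // tolerance is an assumption here
            fmt.Println("delta outside tolerance, would resync the guest clock")
        }
    }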
	I1007 12:33:53.935309  766330 start.go:83] releasing machines lock for "ha-053933-m03", held for 28.033038751s
	I1007 12:33:53.935340  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.935600  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:53.938944  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.939372  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.939401  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.942103  766330 out.go:177] * Found network options:
	I1007 12:33:53.943700  766330 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.227
	W1007 12:33:53.945305  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.945333  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.945354  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946191  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946469  766330 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:33:53.946569  766330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:33:53.946621  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	W1007 12:33:53.946704  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:33:53.946780  766330 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:33:53.946900  766330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:33:53.946926  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:33:53.950981  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951020  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951403  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951437  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:53.951453  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951491  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:53.951686  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951876  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:33:53.951902  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952038  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:33:53.952066  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952209  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:33:53.952204  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:53.952359  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:33:54.197386  766330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:33:54.205923  766330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:33:54.206059  766330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:33:54.226436  766330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:33:54.226467  766330 start.go:495] detecting cgroup driver to use...
	I1007 12:33:54.226539  766330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:33:54.247190  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:33:54.263380  766330 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:33:54.263461  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:33:54.280192  766330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:33:54.297621  766330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:33:54.421983  766330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:33:54.595012  766330 docker.go:233] disabling docker service ...
	I1007 12:33:54.595103  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:33:54.611124  766330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:33:54.625647  766330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:33:54.766528  766330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:33:54.902157  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:33:54.917030  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:33:54.939198  766330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:33:54.939275  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.951699  766330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:33:54.951792  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.963943  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.975263  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:54.986454  766330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:33:54.998449  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.010053  766330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.029064  766330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:33:55.040536  766330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:33:55.051384  766330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:33:55.051443  766330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:33:55.065668  766330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:33:55.076166  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:55.212352  766330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:33:55.312005  766330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:33:55.312090  766330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:33:55.318387  766330 start.go:563] Will wait 60s for crictl version
	I1007 12:33:55.318471  766330 ssh_runner.go:195] Run: which crictl
	I1007 12:33:55.322868  766330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:33:55.367251  766330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:33:55.367355  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.397971  766330 ssh_runner.go:195] Run: crio --version
	I1007 12:33:55.435128  766330 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:33:55.436490  766330 out.go:177]   - env NO_PROXY=192.168.39.152
	I1007 12:33:55.437841  766330 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.227
	I1007 12:33:55.439394  766330 main.go:141] libmachine: (ha-053933-m03) Calling .GetIP
	I1007 12:33:55.442218  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442572  766330 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:33:55.442593  766330 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:33:55.442854  766330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:33:55.447427  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:55.460437  766330 mustload.go:65] Loading cluster: ha-053933
	I1007 12:33:55.460787  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:33:55.461177  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.461238  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.477083  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1007 12:33:55.477627  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.478242  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.478264  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.478601  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.478770  766330 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:33:55.480358  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:55.480665  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:55.480703  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:55.497617  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I1007 12:33:55.498208  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:55.498771  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:55.498802  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:55.499144  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:55.499349  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:55.499537  766330 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.53
	I1007 12:33:55.499550  766330 certs.go:194] generating shared ca certs ...
	I1007 12:33:55.499567  766330 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.499698  766330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:33:55.499751  766330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:33:55.499772  766330 certs.go:256] generating profile certs ...
	I1007 12:33:55.499874  766330 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:33:55.499909  766330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23
	I1007 12:33:55.499931  766330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:33:55.566679  766330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 ...
	I1007 12:33:55.566718  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23: {Name:mk9518d7a648a9de4b8c05fe89f1c3f09f2c6a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.566929  766330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 ...
	I1007 12:33:55.566948  766330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23: {Name:mkdcb7e0de901ae74037605940d4a487b0fb8b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:33:55.567053  766330 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:33:55.567210  766330 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.2a803e23 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:33:55.567369  766330 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:33:55.567391  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:33:55.567411  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:33:55.567431  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:33:55.567450  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:33:55.567469  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:33:55.567488  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:33:55.567506  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:33:55.586158  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:33:55.586279  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:33:55.586335  766330 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:33:55.586352  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:33:55.586387  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:33:55.586425  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:33:55.586458  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:33:55.586517  766330 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:33:55.586558  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:33:55.586579  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:55.586598  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:33:55.586646  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:55.589684  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590162  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:55.590193  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:55.590365  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:55.590577  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:55.590763  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:55.590948  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:55.666401  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:33:55.672290  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:33:55.685836  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:33:55.691589  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 12:33:55.704365  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:33:55.709554  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:33:55.723585  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:33:55.728967  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:33:55.742781  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:33:55.747517  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:33:55.759055  766330 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:33:55.763953  766330 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:33:55.775294  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:33:55.802739  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:33:55.829606  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:33:55.854203  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:33:55.881501  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:33:55.907802  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:33:55.935368  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:33:55.966709  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:33:55.993237  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:33:56.018616  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:33:56.044579  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:33:56.069120  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:33:56.087293  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 12:33:56.105801  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:33:56.126196  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:33:56.145822  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:33:56.163980  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:33:56.182187  766330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:33:56.201073  766330 ssh_runner.go:195] Run: openssl version
	I1007 12:33:56.207142  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:33:56.218685  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.223978  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.224097  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:33:56.231835  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:33:56.243660  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:33:56.255269  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260456  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.260520  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:33:56.267451  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:33:56.279865  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:33:56.291556  766330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296671  766330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.296755  766330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:33:56.303021  766330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:33:56.314190  766330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:33:56.319184  766330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:33:56.319253  766330 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I1007 12:33:56.319359  766330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:33:56.319393  766330 kube-vip.go:115] generating kube-vip config ...
	I1007 12:33:56.319441  766330 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:33:56.337458  766330 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:33:56.337539  766330 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:33:56.337609  766330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.352182  766330 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:33:56.352262  766330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:33:56.364932  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.364895  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:33:56.365107  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:33:56.365108  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:33:56.364948  766330 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:33:56.365318  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.365380  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:33:56.386729  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:33:56.386794  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:33:56.386811  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:33:56.386844  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:33:56.386813  766330 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.387110  766330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:33:56.420143  766330 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:33:56.420202  766330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 12:33:57.371744  766330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:33:57.382647  766330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 12:33:57.402832  766330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:33:57.421823  766330 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:33:57.441482  766330 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:33:57.445627  766330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:33:57.459762  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:33:57.603405  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:33:57.624431  766330 host.go:66] Checking if "ha-053933" exists ...
	I1007 12:33:57.624969  766330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:33:57.625051  766330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:33:57.641787  766330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I1007 12:33:57.642353  766330 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:33:57.642903  766330 main.go:141] libmachine: Using API Version  1
	I1007 12:33:57.642925  766330 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:33:57.643307  766330 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:33:57.643533  766330 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:33:57.643693  766330 start.go:317] joinCluster: &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:33:57.643829  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:33:57.643846  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:33:57.646962  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647481  766330 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:33:57.647512  766330 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:33:57.647651  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:33:57.647823  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:33:57.647983  766330 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:33:57.648106  766330 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:33:57.973692  766330 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:33:57.973754  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I1007 12:34:20.692568  766330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7pzgfr.51k0s4v7v8nz4q6q --discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-053933-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.718770843s)
	I1007 12:34:20.692609  766330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:34:21.235276  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-053933-m03 minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=ha-053933 minikube.k8s.io/primary=false
	I1007 12:34:21.384823  766330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-053933-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:34:21.546452  766330 start.go:319] duration metric: took 23.902751753s to joinCluster
	I1007 12:34:21.546537  766330 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:34:21.547030  766330 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:34:21.548080  766330 out.go:177] * Verifying Kubernetes components...
	I1007 12:34:21.549612  766330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:34:21.823190  766330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:34:21.845870  766330 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:34:21.846263  766330 kapi.go:59] client config for ha-053933: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:34:21.846360  766330 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I1007 12:34:21.846701  766330 node_ready.go:35] waiting up to 6m0s for node "ha-053933-m03" to be "Ready" ...
	I1007 12:34:21.846820  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:21.846832  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:21.846844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:21.846854  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:21.850883  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:22.347874  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.347909  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.347923  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.347929  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.351566  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:22.847344  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:22.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:22.847377  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:22.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:22.866723  766330 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 12:34:23.347347  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.347375  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.347387  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.347394  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.351929  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:23.847333  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:23.847355  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:23.847363  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:23.847372  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:23.850896  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:23.851597  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:24.347594  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.347622  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.347633  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.347638  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.351365  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:24.847338  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:24.847369  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:24.847382  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:24.847389  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:24.850525  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.347474  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.347501  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.347512  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.347517  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.350876  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:25.847008  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:25.847039  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:25.847047  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:25.847052  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:25.850192  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.347863  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.347891  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.347899  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.347903  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.351555  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:26.352073  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:26.847450  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:26.847477  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:26.847485  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:26.847489  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:26.851359  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.347145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.347169  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.347179  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.347185  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.350867  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:27.847674  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:27.847701  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:27.847710  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:27.847715  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:27.851381  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.346976  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.347004  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.347016  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.347020  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.350677  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:28.847299  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:28.847324  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:28.847334  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:28.847342  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:28.852124  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:28.852851  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:29.347470  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.347495  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.347506  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.347511  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.351169  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:29.847063  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:29.847088  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:29.847096  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:29.847101  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:29.850541  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:30.347314  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.347341  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.347349  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.347354  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.351677  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:30.847295  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:30.847322  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:30.847331  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:30.847337  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:30.851021  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.347887  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.347917  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.347928  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.347932  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.351855  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:31.352449  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:31.847880  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:31.847906  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:31.847914  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:31.847918  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:31.851368  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.347251  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.347285  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.347297  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.347304  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.351028  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:32.847346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:32.847371  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:32.847380  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:32.847385  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:32.850808  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.347425  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.347452  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.347461  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.347465  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.351213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:33.847937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:33.847961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:33.847976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:33.847981  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:33.852995  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:33.853973  766330 node_ready.go:53] node "ha-053933-m03" has status "Ready":"False"
	I1007 12:34:34.347964  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.347989  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.348006  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.348012  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.351982  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:34.847651  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:34.847676  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:34.847685  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:34.847690  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:34.851757  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.347354  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.347377  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.347386  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.347390  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.351104  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.847711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:35.847737  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.847748  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.847753  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.858606  766330 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:34:35.859308  766330 node_ready.go:49] node "ha-053933-m03" has status "Ready":"True"
	I1007 12:34:35.859333  766330 node_ready.go:38] duration metric: took 14.012608332s for node "ha-053933-m03" to be "Ready" ...
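The node_ready phase above polls GET /api/v1/nodes/ha-053933-m03 roughly every 500ms until the node's Ready condition flips to True. A minimal client-go sketch of the same kind of poll follows; the kubeconfig path and node name are assumptions for illustration, and this is not minikube's own implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path and node name, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        // Poll about every 500ms, like the log above, until NodeReady is True.
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, "ha-053933-m03", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }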
	I1007 12:34:35.859345  766330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:34:35.859442  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:35.859456  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.859468  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.859474  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.869218  766330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:34:35.877082  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.877211  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sj44v
	I1007 12:34:35.877225  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.877235  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.877246  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.881909  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.883332  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.883357  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.883368  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.883378  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.888505  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:35.889562  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.889584  766330 pod_ready.go:82] duration metric: took 12.462204ms for pod "coredns-7c65d6cfc9-sj44v" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889599  766330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.889693  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-tqtzn
	I1007 12:34:35.889703  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.889714  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.889720  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.894158  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.894859  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.894878  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.894888  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.894894  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.898314  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.898768  766330 pod_ready.go:93] pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.898786  766330 pod_ready.go:82] duration metric: took 9.180577ms for pod "coredns-7c65d6cfc9-tqtzn" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898799  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.898867  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933
	I1007 12:34:35.898875  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.898882  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.898885  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.903049  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:35.903727  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:35.903743  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.903754  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.903761  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.906490  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.907003  766330 pod_ready.go:93] pod "etcd-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.907073  766330 pod_ready.go:82] duration metric: took 8.251291ms for pod "etcd-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907112  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.907213  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m02
	I1007 12:34:35.907222  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.907230  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.907250  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.910128  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:35.910735  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:35.910749  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:35.910760  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:35.910767  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:35.914012  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:35.914767  766330 pod_ready.go:93] pod "etcd-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:35.914789  766330 pod_ready.go:82] duration metric: took 7.665567ms for pod "etcd-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:35.914802  766330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:36.048508  766330 request.go:632] Waited for 133.622997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048575  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.048580  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.048588  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.048592  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.052571  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
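The repeated "Waited for … due to client-side throttling, not priority and fairness" entries come from client-go's client-side rate limiter (5 QPS with a burst of 10 by default), not from server-side API Priority and Fairness. A hedged sketch of raising those limits on a rest.Config; the path and values are illustrative, not what minikube configures.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }

        // client-go throttles requests on the client side when QPS/Burst are left
        // at their defaults; raising them shortens the "Waited for ..." delays.
        cfg.QPS = 50
        cfg.Burst = 100

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", client)
    }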
	I1007 12:34:36.248730  766330 request.go:632] Waited for 195.373798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248827  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.248836  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.248844  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.248849  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.251932  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.448570  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.448595  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.448605  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.448610  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.452907  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:36.647847  766330 request.go:632] Waited for 194.331001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647936  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:36.647943  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.647951  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.647956  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.651933  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:36.915705  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:36.915729  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:36.915738  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:36.915742  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:36.919213  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.048315  766330 request.go:632] Waited for 128.338635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048400  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.048408  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.048424  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.048429  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.051185  766330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:34:37.415988  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.416012  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.416021  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.416026  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.419983  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.448134  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.448158  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.448168  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.448175  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.451453  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.915937  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-053933-m03
	I1007 12:34:37.915961  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.915971  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.915976  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.920167  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:37.921049  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:37.921073  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:37.921086  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:37.921093  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:37.924604  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:37.925286  766330 pod_ready.go:93] pod "etcd-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:37.925306  766330 pod_ready.go:82] duration metric: took 2.010496086s for pod "etcd-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:37.925324  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.048769  766330 request.go:632] Waited for 123.357964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933
	I1007 12:34:38.048854  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.048866  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.048882  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.052431  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.248516  766330 request.go:632] Waited for 195.362302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248623  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:38.248634  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.248644  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.248651  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.252242  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.252762  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.252784  766330 pod_ready.go:82] duration metric: took 327.452579ms for pod "kube-apiserver-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.252797  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.447801  766330 request.go:632] Waited for 194.917273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447884  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m02
	I1007 12:34:38.447889  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.447897  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.447902  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.451491  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:38.648627  766330 request.go:632] Waited for 196.37134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648711  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:38.648716  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.648722  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.648732  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.652823  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:38.653461  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:38.653480  766330 pod_ready.go:82] duration metric: took 400.67636ms for pod "kube-apiserver-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.653490  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:38.848685  766330 request.go:632] Waited for 195.113793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848846  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-053933-m03
	I1007 12:34:38.848879  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:38.848893  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:38.848898  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:38.853139  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:39.048666  766330 request.go:632] Waited for 194.422198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048757  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:39.048765  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.048773  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.048780  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.052403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.052899  766330 pod_ready.go:93] pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.052921  766330 pod_ready.go:82] duration metric: took 399.423284ms for pod "kube-apiserver-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.052935  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.248381  766330 request.go:632] Waited for 195.347943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248463  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933
	I1007 12:34:39.248470  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.248479  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.248532  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.252304  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.448654  766330 request.go:632] Waited for 195.421963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448774  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:39.448781  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.448789  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.448794  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.452418  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.452966  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.452987  766330 pod_ready.go:82] duration metric: took 400.045067ms for pod "kube-controller-manager-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.452997  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.648075  766330 request.go:632] Waited for 195.002627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648177  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m02
	I1007 12:34:39.648188  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.648196  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.648203  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.651698  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.848035  766330 request.go:632] Waited for 195.367175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848150  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:39.848170  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:39.848184  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:39.848192  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:39.851573  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:39.852402  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:39.852421  766330 pod_ready.go:82] duration metric: took 399.417648ms for pod "kube-controller-manager-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:39.852432  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.048539  766330 request.go:632] Waited for 196.032961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048627  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-053933-m03
	I1007 12:34:40.048633  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.048641  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.048647  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.052288  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.248694  766330 request.go:632] Waited for 195.442218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248809  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:40.248819  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.248829  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.248839  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.252540  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.253313  766330 pod_ready.go:93] pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.253337  766330 pod_ready.go:82] duration metric: took 400.899295ms for pod "kube-controller-manager-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.253349  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.448782  766330 request.go:632] Waited for 195.339385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448860  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7bwxp
	I1007 12:34:40.448867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.448879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.448899  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.452366  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.648273  766330 request.go:632] Waited for 194.918691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648346  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:40.648352  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.648361  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.648367  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.651885  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:40.652427  766330 pod_ready.go:93] pod "kube-proxy-7bwxp" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:40.652452  766330 pod_ready.go:82] duration metric: took 399.095883ms for pod "kube-proxy-7bwxp" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.652465  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:40.848579  766330 request.go:632] Waited for 196.00042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848642  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dqqj6
	I1007 12:34:40.848648  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:40.848657  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:40.848660  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:40.852403  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.048483  766330 request.go:632] Waited for 195.416905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048561  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:41.048566  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.048574  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.048582  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.052281  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.052757  766330 pod_ready.go:93] pod "kube-proxy-dqqj6" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.052775  766330 pod_ready.go:82] duration metric: took 400.298296ms for pod "kube-proxy-dqqj6" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.052785  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.247821  766330 request.go:632] Waited for 194.952122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247915  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zvblz
	I1007 12:34:41.247920  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.247942  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.247958  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.251753  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.447806  766330 request.go:632] Waited for 195.292745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447871  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:41.447876  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.447883  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.447887  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.451374  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.452013  766330 pod_ready.go:93] pod "kube-proxy-zvblz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.452035  766330 pod_ready.go:82] duration metric: took 399.242268ms for pod "kube-proxy-zvblz" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.452048  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.648060  766330 request.go:632] Waited for 195.92136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648145  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933
	I1007 12:34:41.648167  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.648176  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.648181  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.652281  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:41.848221  766330 request.go:632] Waited for 195.408754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848307  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933
	I1007 12:34:41.848321  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:41.848329  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:41.848332  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:41.851502  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:41.852147  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:41.852173  766330 pod_ready.go:82] duration metric: took 400.115446ms for pod "kube-scheduler-ha-053933" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:41.852186  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.048319  766330 request.go:632] Waited for 196.021861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048415  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m02
	I1007 12:34:42.048421  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.048429  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.048434  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.051904  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.247954  766330 request.go:632] Waited for 195.30672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248042  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m02
	I1007 12:34:42.248048  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.248056  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.248060  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.251799  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.252357  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.252378  766330 pod_ready.go:82] duration metric: took 400.185892ms for pod "kube-scheduler-ha-053933-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.252389  766330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.448570  766330 request.go:632] Waited for 196.083361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448644  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-053933-m03
	I1007 12:34:42.448649  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.448658  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.448665  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.452279  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.648464  766330 request.go:632] Waited for 195.372097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648558  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-053933-m03
	I1007 12:34:42.648567  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.648575  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.648587  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.651837  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:42.652442  766330 pod_ready.go:93] pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:34:42.652462  766330 pod_ready.go:82] duration metric: took 400.066938ms for pod "kube-scheduler-ha-053933-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:34:42.652473  766330 pod_ready.go:39] duration metric: took 6.79311586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
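The pod_ready phase above repeats one pattern per control-plane component: GET the pod in kube-system, check its PodReady condition, then GET the node it runs on. A compact client-go sketch of that per-pod check; the pod names are hardcoded here only as examples, whereas minikube derives them from label selectors.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Illustrative subset of the system-critical pods checked in the log.
        names := []string{"etcd-ha-053933", "kube-apiserver-ha-053933", "kube-scheduler-ha-053933"}
        for _, name := range names {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s Ready=%v\n", name, podReady(pod))
        }
    }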
	I1007 12:34:42.652490  766330 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:34:42.652549  766330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:34:42.669655  766330 api_server.go:72] duration metric: took 21.123075945s to wait for apiserver process to appear ...
	I1007 12:34:42.669686  766330 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:34:42.669721  766330 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I1007 12:34:42.677436  766330 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I1007 12:34:42.677526  766330 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I1007 12:34:42.677533  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.677545  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.677556  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.678540  766330 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:34:42.678609  766330 api_server.go:141] control plane version: v1.31.1
	I1007 12:34:42.678628  766330 api_server.go:131] duration metric: took 8.935272ms to wait for apiserver health ...
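Once the pods are Ready, the log shows two cheap control-plane probes: GET /healthz (expecting the literal body "ok") and GET /version (reporting v1.31.1 above). A sketch of both calls through client-go's discovery REST client, with the kubeconfig path assumed.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz returns the plain-text body "ok" when the apiserver is healthy.
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version reports the control-plane version.
        info, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", info.GitVersion)
    }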
	I1007 12:34:42.678643  766330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:34:42.848087  766330 request.go:632] Waited for 169.34722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848178  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:42.848184  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:42.848192  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:42.848197  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:42.854471  766330 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:34:42.861098  766330 system_pods.go:59] 24 kube-system pods found
	I1007 12:34:42.861133  766330 system_pods.go:61] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:42.861137  766330 system_pods.go:61] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:42.861141  766330 system_pods.go:61] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:42.861145  766330 system_pods.go:61] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:42.861148  766330 system_pods.go:61] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:42.861151  766330 system_pods.go:61] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:42.861154  766330 system_pods.go:61] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:42.861157  766330 system_pods.go:61] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:42.861160  766330 system_pods.go:61] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:42.861163  766330 system_pods.go:61] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:42.861166  766330 system_pods.go:61] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:42.861170  766330 system_pods.go:61] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:42.861177  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:42.861180  766330 system_pods.go:61] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:42.861182  766330 system_pods.go:61] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:42.861185  766330 system_pods.go:61] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:42.861189  766330 system_pods.go:61] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:42.861191  766330 system_pods.go:61] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:42.861194  766330 system_pods.go:61] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:42.861197  766330 system_pods.go:61] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:42.861200  766330 system_pods.go:61] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:42.861203  766330 system_pods.go:61] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:42.861206  766330 system_pods.go:61] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:42.861212  766330 system_pods.go:61] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:42.861221  766330 system_pods.go:74] duration metric: took 182.569158ms to wait for pod list to return data ...
	I1007 12:34:42.861229  766330 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:34:43.048753  766330 request.go:632] Waited for 187.419479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048837  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:34:43.048867  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.048875  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.048879  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.053383  766330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:34:43.053574  766330 default_sa.go:45] found service account: "default"
	I1007 12:34:43.053596  766330 default_sa.go:55] duration metric: took 192.357019ms for default service account to be created ...
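The default_sa step simply confirms that the "default" ServiceAccount exists in the "default" namespace, since it is created asynchronously after the namespace itself. A minimal equivalent check, with the kubeconfig path assumed.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The controller manager creates this account shortly after the namespace exists.
        sa, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found service account: %q\n", sa.Name)
    }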
	I1007 12:34:43.053609  766330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:34:43.248358  766330 request.go:632] Waited for 194.661822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248434  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I1007 12:34:43.248457  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.248468  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.248480  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.254368  766330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:34:43.261575  766330 system_pods.go:86] 24 kube-system pods found
	I1007 12:34:43.261611  766330 system_pods.go:89] "coredns-7c65d6cfc9-sj44v" [268afc07-099f-4bed-bed4-7fdc7c64b948] Running
	I1007 12:34:43.261617  766330 system_pods.go:89] "coredns-7c65d6cfc9-tqtzn" [8b161488-236f-456d-9385-0ed32039f1c8] Running
	I1007 12:34:43.261621  766330 system_pods.go:89] "etcd-ha-053933" [63997434-cbf6-4a65-9fa8-a7ab043edddd] Running
	I1007 12:34:43.261625  766330 system_pods.go:89] "etcd-ha-053933-m02" [60f3534a-842d-4e71-9969-42a63eabe43a] Running
	I1007 12:34:43.261628  766330 system_pods.go:89] "etcd-ha-053933-m03" [b5203bce-d117-454b-904a-3ff1588b69cb] Running
	I1007 12:34:43.261632  766330 system_pods.go:89] "kindnet-4gmn6" [c532bcb5-a558-4246-87a7-540b2241a92d] Running
	I1007 12:34:43.261636  766330 system_pods.go:89] "kindnet-6tzch" [a01d220d-f69a-4de4-aae6-0f158e60bd2c] Running
	I1007 12:34:43.261641  766330 system_pods.go:89] "kindnet-cx4hw" [59831aaf-ad53-4176-abd5-902311b908bc] Running
	I1007 12:34:43.261646  766330 system_pods.go:89] "kube-apiserver-ha-053933" [4292cf26-48af-47d3-afc7-f53c840348b4] Running
	I1007 12:34:43.261651  766330 system_pods.go:89] "kube-apiserver-ha-053933-m02" [33366f2a-d3bd-475c-8154-f0b543f44ab0] Running
	I1007 12:34:43.261656  766330 system_pods.go:89] "kube-apiserver-ha-053933-m03" [7ea0a181-68ad-42cf-9043-b16b90306203] Running
	I1007 12:34:43.261665  766330 system_pods.go:89] "kube-controller-manager-ha-053933" [df0c2c43-27c7-4279-b60b-e1b1c0cc385e] Running
	I1007 12:34:43.261670  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m02" [b0818773-c362-4504-9614-310461bf0743] Running
	I1007 12:34:43.261679  766330 system_pods.go:89] "kube-controller-manager-ha-053933-m03" [c8035607-d60b-478a-b29e-2d52216f56c2] Running
	I1007 12:34:43.261684  766330 system_pods.go:89] "kube-proxy-7bwxp" [5be0956c-823b-4ca7-8d46-4f8e81e5bbb3] Running
	I1007 12:34:43.261689  766330 system_pods.go:89] "kube-proxy-dqqj6" [1c6e5f1b-fe5e-4a4e-9434-f8241710cb2c] Running
	I1007 12:34:43.261704  766330 system_pods.go:89] "kube-proxy-zvblz" [17f099a2-baf5-4091-83f2-c823a214ac10] Running
	I1007 12:34:43.261709  766330 system_pods.go:89] "kube-scheduler-ha-053933" [9a10d3da-5e83-4c4e-b085-50bdc88df86b] Running
	I1007 12:34:43.261713  766330 system_pods.go:89] "kube-scheduler-ha-053933-m02" [14178262-c6eb-477b-be10-edc42bb354b6] Running
	I1007 12:34:43.261719  766330 system_pods.go:89] "kube-scheduler-ha-053933-m03" [7bdf2416-44cb-4d26-940d-f03c8fe9aa8d] Running
	I1007 12:34:43.261722  766330 system_pods.go:89] "kube-vip-ha-053933" [88bffc39-13e8-4460-aac3-2aabffef9127] Running
	I1007 12:34:43.261730  766330 system_pods.go:89] "kube-vip-ha-053933-m02" [23a52b25-0324-416e-b983-e69f9851a55b] Running
	I1007 12:34:43.261736  766330 system_pods.go:89] "kube-vip-ha-053933-m03" [caf041f0-d94a-4756-9b69-d1ce53edeb44] Running
	I1007 12:34:43.261739  766330 system_pods.go:89] "storage-provisioner" [ac6bab3d-040f-4b93-9b26-1ce7e373ba68] Running
	I1007 12:34:43.261746  766330 system_pods.go:126] duration metric: took 208.130933ms to wait for k8s-apps to be running ...
	I1007 12:34:43.261758  766330 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:34:43.261819  766330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:34:43.278366  766330 system_svc.go:56] duration metric: took 16.59381ms WaitForService to wait for kubelet
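The system_svc step runs "sudo systemctl is-active --quiet service kubelet" on the node over SSH; an exit status of 0 means the kubelet unit is active. A local, hedged equivalent using os/exec is sketched below (minikube actually issues this through its ssh_runner, not locally).

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 only when the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }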
	I1007 12:34:43.278406  766330 kubeadm.go:582] duration metric: took 21.731835186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:34:43.278428  766330 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:34:43.447722  766330 request.go:632] Waited for 169.191028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447802  766330 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I1007 12:34:43.447807  766330 round_trippers.go:469] Request Headers:
	I1007 12:34:43.447815  766330 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:34:43.447822  766330 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:34:43.451521  766330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:34:43.453111  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453136  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453151  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453154  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453158  766330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:34:43.453161  766330 node_conditions.go:123] node cpu capacity is 2
	I1007 12:34:43.453165  766330 node_conditions.go:105] duration metric: took 174.732727ms to run NodePressure ...
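The NodePressure check lists every node and reads its ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node above); the same node objects also expose the MemoryPressure/DiskPressure/PIDPressure conditions. A sketch that prints both, assuming a local kubeconfig path.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }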
	I1007 12:34:43.453176  766330 start.go:241] waiting for startup goroutines ...
	I1007 12:34:43.453200  766330 start.go:255] writing updated cluster config ...
	I1007 12:34:43.453638  766330 ssh_runner.go:195] Run: rm -f paused
	I1007 12:34:43.510074  766330 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:34:43.512318  766330 out.go:177] * Done! kubectl is now configured to use "ha-053933" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.254119149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304716254097739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09267192-5753-4cf2-99d9-0fd387668921 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.254703249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda477d1-3623-4ef3-8926-974837270a7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.254754407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda477d1-3623-4ef3-8926-974837270a7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.254974589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda477d1-3623-4ef3-8926-974837270a7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.302001980Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01975cf1-e3e9-469b-9282-2aae1a446ec0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.302076706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01975cf1-e3e9-469b-9282-2aae1a446ec0 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.303855941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbabd833-8923-4d41-b7f9-0f415cb78cfb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.304264250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304716304241481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbabd833-8923-4d41-b7f9-0f415cb78cfb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.304824985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10167ca2-4bb4-466c-9243-0065d4b3a975 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.304897046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10167ca2-4bb4-466c-9243-0065d4b3a975 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.305211347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10167ca2-4bb4-466c-9243-0065d4b3a975 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.348980296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15252fae-db39-46ae-87e5-012ea0af3c25 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.349085686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15252fae-db39-46ae-87e5-012ea0af3c25 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.350331224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19a20aab-805e-4ce4-9fe0-f23f0e21e92f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.350880843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304716350851097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19a20aab-805e-4ce4-9fe0-f23f0e21e92f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.351475528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=275843c8-1e66-4dc3-9f1c-84379c894671 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.351610622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=275843c8-1e66-4dc3-9f1c-84379c894671 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.351853833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=275843c8-1e66-4dc3-9f1c-84379c894671 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.397463623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=779073bb-d901-4f74-9a4f-0e10720b7121 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.397609853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=779073bb-d901-4f74-9a4f-0e10720b7121 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.399039561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55e0f448-86c5-4f2a-9237-15fbacf0eec7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.399586244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304716399554329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55e0f448-86c5-4f2a-9237-15fbacf0eec7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.400580229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efa6b2fa-39be-446b-8d1f-2684453ac54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.400767729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efa6b2fa-39be-446b-8d1f-2684453ac54e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:38:36 ha-053933 crio[664]: time="2024-10-07 12:38:36.401608651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ba824fcefba6605ce22a7a059a66e3de9fd743f83e02534019bfe9e8fb517cb,PodSandboxId:e189556a18c9208c705057dda80a7ad5628be68e0e1db4b98852f83d41ba952e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728304487421016185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gx88f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee12293-4d71-4418-957b-7685c35307e1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5,PodSandboxId:89c61a059649dca0551337dc321afd0d70ac7bfc44a3f97e0a9127623ced186f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341576640234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sj44v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 268afc07-099f-4bed-bed4-7fdc7c64b948,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4,PodSandboxId:0d58c208fea1c152e772aa3d9a1aaeec54446b90db3d8bd7305fbe63b3463dea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728304341605740283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tqtzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8b161488-236f-456d-9385-0ed32039f1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416,PodSandboxId:8d79b5c178f5d3d581c45c57787d5733b64e62d645d2ddf60330233fafd473f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728304341514483808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6bab3d-040f-4b93-9b26-1ce7e373ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c,PodSandboxId:1546c9281ca68fdcfa1c46672ca52b916f5d3fb808682f701c1776f14399310a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283043
29423330416,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4gmn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c532bcb5-a558-4246-87a7-540b2241a92d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437,PodSandboxId:6bb33ce6417a6bd541becfad6bc063ebe650940eaf954e2652269bc110f076f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728304329081959911,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7bwxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be0956c-823b-4ca7-8d46-4f8e81e5bbb3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd,PodSandboxId:0e8b4b3150e401409ed46f4b147ecd274fcb25ead91f8a120710086610cc62e8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728304319593395636,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c327992018cf3adef604f8e7c0b6ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255,PodSandboxId:228ca0c55468f03f68ace6e851af10d3ff610fbc1bfd944cdcfe3da063207f21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728304317691388479,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83382111c0ed3e763a0e292bd03c0bd6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866,PodSandboxId:90cea5dfb2e910c8cc20093a2832f77447a7102842374a99ca0b0b82e9b7b05b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728304317636853413,Labels:map[string]string{io.kubernetes.container.name: etcd,io
.kubernetes.pod.name: etcd-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985190db4d35f4cd798aacc03f9ae11b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525,PodSandboxId:cd767df10cb415ce5cf48db3a69d1f5106e36e58885b1bacd27c6344033d5af5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728304317676618941,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-053933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58955b129f3757d64c09a77816310a8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38,PodSandboxId:706ba9f92d690df89f33093649d37c7208565908d8110de65250e6f86396e119,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728304317560180290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-053933,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb4419eb014ffb9581e9f43f41a3509a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efa6b2fa-39be-446b-8d1f-2684453ac54e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ba824fcefba6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e189556a18c92       busybox-7dff88458-gx88f
	2867817e1f480       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   0d58c208fea1c       coredns-7c65d6cfc9-tqtzn
	35044c701c165       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   89c61a059649d       coredns-7c65d6cfc9-sj44v
	3da0371dd7287       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8d79b5c178f5d       storage-provisioner
	65adc93f12fb7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1546c9281ca68       kindnet-4gmn6
	aea74cdff9eee       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   6bb33ce6417a6       kube-proxy-7bwxp
	e756202203ed3       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   0e8b4b3150e40       kube-vip-ha-053933
	f190ed8ea3a7d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   228ca0c55468f       kube-controller-manager-ha-053933
	096488f001092       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cd767df10cb41       kube-scheduler-ha-053933
	fe11729317aca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   90cea5dfb2e91       etcd-ha-053933
	a23f58b62cf7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   706ba9f92d690       kube-apiserver-ha-053933
	
	
	==> coredns [2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4] <==
	[INFO] 10.244.1.2:56331 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237909s
	[INFO] 10.244.1.2:36489 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015207s
	[INFO] 10.244.2.2:39298 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129286s
	[INFO] 10.244.2.2:47065 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177192s
	[INFO] 10.244.2.2:34384 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120996s
	[INFO] 10.244.2.2:55346 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176087s
	[INFO] 10.244.0.4:46975 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114471s
	[INFO] 10.244.0.4:58945 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225792s
	[INFO] 10.244.0.4:43259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067959s
	[INFO] 10.244.0.4:34928 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001509847s
	[INFO] 10.244.0.4:46991 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079782s
	[INFO] 10.244.0.4:59761 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084499s
	[INFO] 10.244.1.2:49251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140128s
	[INFO] 10.244.1.2:33825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172303s
	[INFO] 10.244.2.2:58538 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000185922s
	[INFO] 10.244.0.4:44359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137041s
	[INFO] 10.244.0.4:58301 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099102s
	[INFO] 10.244.1.2:36803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222211s
	[INFO] 10.244.1.2:41006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207899s
	[INFO] 10.244.1.2:43041 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129649s
	[INFO] 10.244.2.2:45405 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175032s
	[INFO] 10.244.2.2:36952 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000143195s
	[INFO] 10.244.0.4:39376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106075s
	[INFO] 10.244.0.4:60091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121535s
	[INFO] 10.244.0.4:37488 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084395s
	
	
	==> coredns [35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5] <==
	[INFO] 10.244.2.2:33316 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000351738s
	[INFO] 10.244.2.2:40861 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001441898s
	[INFO] 10.244.0.4:57140 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000078781s
	[INFO] 10.244.1.2:58603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135026s
	[INFO] 10.244.1.2:54055 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005238284s
	[INFO] 10.244.1.2:56033 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000250432s
	[INFO] 10.244.1.2:35801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000184148s
	[INFO] 10.244.1.2:59610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190826s
	[INFO] 10.244.2.2:33184 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001859772s
	[INFO] 10.244.2.2:46345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160195s
	[INFO] 10.244.2.2:58454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001735681s
	[INFO] 10.244.2.2:51235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213117s
	[INFO] 10.244.0.4:40361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002214882s
	[INFO] 10.244.0.4:35596 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091564s
	[INFO] 10.244.1.2:54454 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176281s
	[INFO] 10.244.1.2:54571 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089015s
	[INFO] 10.244.2.2:54102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258038s
	[INFO] 10.244.2.2:51160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106978s
	[INFO] 10.244.2.2:57393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167598s
	[INFO] 10.244.0.4:39801 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084483s
	[INFO] 10.244.0.4:60729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097532s
	[INFO] 10.244.1.2:36580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164463s
	[INFO] 10.244.2.2:48616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036575s
	[INFO] 10.244.2.2:54375 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256014s
	[INFO] 10.244.0.4:46032 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082269s
	
	
	==> describe nodes <==
	Name:               ha-053933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_32_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:08 +0000   Mon, 07 Oct 2024 12:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-053933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 081ddd3e0f204426846b528e120c10c6
	  System UUID:                081ddd3e-0f20-4426-846b-528e120c10c6
	  Boot ID:                    1dece28a-ef9e-423f-833d-5ccfd814e28e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gx88f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-sj44v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 coredns-7c65d6cfc9-tqtzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 etcd-ha-053933                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m32s
	  kube-system                 kindnet-4gmn6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-053933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-053933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-7bwxp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-053933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-vip-ha-053933                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m26s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s  kubelet          Node ha-053933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s  kubelet          Node ha-053933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s  kubelet          Node ha-053933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  NodeReady                6m16s  kubelet          Node ha-053933 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-053933 event: Registered Node ha-053933 in Controller
	
	
	Name:               ha-053933-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:33:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:35:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:35:05 +0000   Mon, 07 Oct 2024 12:36:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-053933-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea0094a740a940c483867f94cc6c27db
	  System UUID:                ea0094a7-40a9-40c4-8386-7f94cc6c27db
	  Boot ID:                    c270f988-c787-4383-b26b-ec82a3153fd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cll72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-053933-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-cx4hw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-053933-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-053933-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-zvblz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-053933-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-vip-ha-053933-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m34s                  cidrAllocator    Node ha-053933-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-053933-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-053933-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-053933-m02 event: Registered Node ha-053933-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-053933-m02 status is now: NodeNotReady
	
	
	Name:               ha-053933-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_34_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:18 +0000   Mon, 07 Oct 2024 12:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-053933-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2c62335e69d4ef7b1309ece17e10873
	  System UUID:                c2c62335-e69d-4ef7-b130-9ece17e10873
	  Boot ID:                    2e17b6e0-0617-4bea-8b9d-8cd903a9fcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fnvw9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-053933-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-6tzch                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-053933-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-053933-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-dqqj6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-053933-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-053933-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m19s                  cidrAllocator    Node ha-053933-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-053933-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-053933-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-053933-m03 event: Registered Node ha-053933-m03 in Controller
	
	
	Name:               ha-053933-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-053933-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-053933
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_35_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-053933-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:38:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:35:48 +0000   Mon, 07 Oct 2024 12:35:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-053933-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 114115be4a5e4a82bdbd4b86727c66b7
	  System UUID:                114115be-4a5e-4a82-bdbd-4b86727c66b7
	  Boot ID:                    dba1fc43-1911-4c9b-b57d-d3bef52a7eef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-874mt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-wmjjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m19s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m19s)  kubelet          Node ha-053933-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m19s)  kubelet          Node ha-053933-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m18s                  cidrAllocator    Node ha-053933-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-053933-m04 event: Registered Node ha-053933-m04 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node ha-053933-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.846047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.599830] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.647512] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.009818] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056187] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087371] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186817] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.108690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.296967] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.247594] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.068909] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.901650] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.502104] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 12:32] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.286659] systemd-fstab-generator[1311]: Ignoring "noauto" option for root device
	[  +5.238921] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.342023] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 12:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866] <==
	{"level":"warn","ts":"2024-10-07T12:38:36.669906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.674855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.675006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.680205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.685730Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.694255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.701458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.709030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.714864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.721326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.729834Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.738853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.743190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.746197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.752103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.759714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.769311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.774898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.779108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.780327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.780470Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.786111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.794796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.805197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:38:36.880417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"7f77dda0665c949d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:38:36 up 7 min,  0 users,  load average: 0.36, 0.21, 0.10
	Linux ha-053933 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c] <==
	I1007 12:38:00.816057       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:10.808104       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:10.808153       1 main.go:299] handling current node
	I1007 12:38:10.808168       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:10.808173       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:10.808359       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:10.808385       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:10.808430       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:10.808435       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812716       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:20.812802       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:20.812961       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:20.812985       1 main.go:299] handling current node
	I1007 12:38:20.813004       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:20.813010       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	I1007 12:38:20.813053       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:20.813073       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:30.816733       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I1007 12:38:30.816763       1 main.go:322] Node ha-053933-m03 has CIDR [10.244.2.0/24] 
	I1007 12:38:30.816892       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I1007 12:38:30.816898       1 main.go:322] Node ha-053933-m04 has CIDR [10.244.3.0/24] 
	I1007 12:38:30.816986       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I1007 12:38:30.816993       1 main.go:299] handling current node
	I1007 12:38:30.817004       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I1007 12:38:30.817008       1 main.go:322] Node ha-053933-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38] <==
	I1007 12:32:02.949969       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1007 12:32:02.963249       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I1007 12:32:02.964729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:32:02.971941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:32:03.069138       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 12:32:03.964342       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 12:32:03.987254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:32:04.095813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:32:08.516111       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 12:32:08.611991       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1007 12:34:48.798901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37568: use of closed network connection
	E1007 12:34:49.000124       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37592: use of closed network connection
	E1007 12:34:49.206162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37608: use of closed network connection
	E1007 12:34:49.419763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E1007 12:34:49.618246       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37650: use of closed network connection
	E1007 12:34:49.830698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37678: use of closed network connection
	E1007 12:34:50.014306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37698: use of closed network connection
	E1007 12:34:50.203031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37722: use of closed network connection
	E1007 12:34:50.399836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37736: use of closed network connection
	E1007 12:34:50.721906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37754: use of closed network connection
	E1007 12:34:50.916874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37778: use of closed network connection
	E1007 12:34:51.129244       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37784: use of closed network connection
	E1007 12:34:51.331880       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37804: use of closed network connection
	E1007 12:34:51.534234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37816: use of closed network connection
	E1007 12:34:51.740225       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37836: use of closed network connection
	
	
	==> kube-controller-manager [f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255] <==
	E1007 12:35:18.261020       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-053933-m04': failed to patch node CIDR: Node \"ha-053933-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1007 12:35:18.261043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.267395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.419356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.886255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:18.927634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m03"
	I1007 12:35:21.910317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.213570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.317164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.867893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:22.869105       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-053933-m04"
	I1007 12:35:22.944595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:28.233385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.043630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.044602       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:35:36.061944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:36.755307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:35:48.386926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m04"
	I1007 12:36:37.247180       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-053933-m04"
	I1007 12:36:37.247992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.283173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:37.296003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.649837ms"
	I1007 12:36:37.296097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.311µs"
	I1007 12:36:37.968993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	I1007 12:36:42.526972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-053933-m02"
	
	
	==> kube-proxy [aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:32:09.744772       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:32:09.779605       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	E1007 12:32:09.779729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:32:09.875780       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:32:09.875870       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:32:09.875896       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:32:09.899096       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:32:09.900043       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:32:09.900063       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:32:09.904977       1 config.go:199] "Starting service config controller"
	I1007 12:32:09.905625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:32:09.905998       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:32:09.906007       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:32:09.909098       1 config.go:328] "Starting node config controller"
	I1007 12:32:09.912651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:32:10.006461       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:32:10.006556       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:32:10.013752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525] <==
	W1007 12:32:02.522045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:32:02.522209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:32:02.691725       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:32:02.691861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 12:32:04.967169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 12:35:18.155212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.155405       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 055fbe2f-0b88-4875-9ee5-5672731cf7e9(kube-system/kindnet-tskmj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tskmj"
	E1007 12:35:18.155442       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tskmj\": pod kindnet-tskmj is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-tskmj"
	I1007 12:35:18.155464       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tskmj" node="ha-053933-m04"
	E1007 12:35:18.234037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.235784       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17a817ae-69ea-44f0-907d-a935057c340a(kube-system/kube-proxy-hkx4p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hkx4p"
	E1007 12:35:18.235899       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hkx4p\": pod kube-proxy-hkx4p is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-hkx4p"
	I1007 12:35:18.235923       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hkx4p" node="ha-053933-m04"
	E1007 12:35:18.234494       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.237640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fe0255b5-5ad9-4633-a28d-ecdf64a0267c(kube-system/kindnet-gbqh5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gbqh5"
	E1007 12:35:18.237709       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gbqh5\": pod kindnet-gbqh5 is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-gbqh5"
	I1007 12:35:18.237727       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gbqh5" node="ha-053933-m04"
	E1007 12:35:18.300436       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300714       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 71fc4648-ffa7-4b9c-b3be-35c98da41798(kube-system/kube-proxy-wmjjq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wmjjq"
	E1007 12:35:18.300906       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wmjjq\": pod kube-proxy-wmjjq is already assigned to node \"ha-053933-m04\"" pod="kube-system/kube-proxy-wmjjq"
	I1007 12:35:18.301040       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wmjjq" node="ha-053933-m04"
	E1007 12:35:18.300489       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	E1007 12:35:18.302463       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cbe2af3e-e15d-4855-b598-450159e1b100(kube-system/kindnet-874mt) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-874mt"
	E1007 12:35:18.302498       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-874mt\": pod kindnet-874mt is already assigned to node \"ha-053933-m04\"" pod="kube-system/kindnet-874mt"
	I1007 12:35:18.302596       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-874mt" node="ha-053933-m04"
	
	
	==> kubelet <==
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248076    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:04 ha-053933 kubelet[1318]: E1007 12:37:04.248142    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304624247762301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250603    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:14 ha-053933 kubelet[1318]: E1007 12:37:14.250995    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304634249677369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252717    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:24 ha-053933 kubelet[1318]: E1007 12:37:24.252763    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304644252330329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.255287    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:34 ha-053933 kubelet[1318]: E1007 12:37:34.257649    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304654253865298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.260273    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:44 ha-053933 kubelet[1318]: E1007 12:37:44.261117    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304664259181802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264814    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:37:54 ha-053933 kubelet[1318]: E1007 12:37:54.264871    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304674264030850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.151993    1318 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:38:04 ha-053933 kubelet[1318]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:38:04 ha-053933 kubelet[1318]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266021    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:04 ha-053933 kubelet[1318]: E1007 12:38:04.266073    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304684265661582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267592    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:14 ha-053933 kubelet[1318]: E1007 12:38:14.267615    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304694267325601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271756    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:24 ha-053933 kubelet[1318]: E1007 12:38:24.271782    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304704271343356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:34 ha-053933 kubelet[1318]: E1007 12:38:34.276225    1318 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304714275285889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:38:34 ha-053933 kubelet[1318]: E1007 12:38:34.276713    1318 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304714275285889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-053933 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-053933 -v=7 --alsologtostderr
E1007 12:39:53.448928  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:40:13.698194  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:40:21.151644  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-053933 -v=7 --alsologtostderr: exit status 82 (2m1.777814739s)

                                                
                                                
-- stdout --
	* Stopping node "ha-053933-m04"  ...
	* Stopping node "ha-053933-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:38:37.963227  771514 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:38:37.963502  771514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:38:37.963512  771514 out.go:358] Setting ErrFile to fd 2...
	I1007 12:38:37.963519  771514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:38:37.963737  771514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:38:37.964025  771514 out.go:352] Setting JSON to false
	I1007 12:38:37.964148  771514 mustload.go:65] Loading cluster: ha-053933
	I1007 12:38:37.964605  771514 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:38:37.964707  771514 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:38:37.964906  771514 mustload.go:65] Loading cluster: ha-053933
	I1007 12:38:37.965041  771514 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:38:37.965077  771514 stop.go:39] StopHost: ha-053933-m04
	I1007 12:38:37.965455  771514 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:38:37.965498  771514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:38:37.981173  771514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1007 12:38:37.981697  771514 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:38:37.982304  771514 main.go:141] libmachine: Using API Version  1
	I1007 12:38:37.982335  771514 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:38:37.982662  771514 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:38:37.985801  771514 out.go:177] * Stopping node "ha-053933-m04"  ...
	I1007 12:38:37.987489  771514 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:38:37.987543  771514 main.go:141] libmachine: (ha-053933-m04) Calling .DriverName
	I1007 12:38:37.987906  771514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:38:37.987935  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHHostname
	I1007 12:38:37.990996  771514 main.go:141] libmachine: (ha-053933-m04) DBG | domain ha-053933-m04 has defined MAC address 52:54:00:4c:af:8f in network mk-ha-053933
	I1007 12:38:37.991469  771514 main.go:141] libmachine: (ha-053933-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:af:8f", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:35:06 +0000 UTC Type:0 Mac:52:54:00:4c:af:8f Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-053933-m04 Clientid:01:52:54:00:4c:af:8f}
	I1007 12:38:37.991496  771514 main.go:141] libmachine: (ha-053933-m04) DBG | domain ha-053933-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:4c:af:8f in network mk-ha-053933
	I1007 12:38:37.991642  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHPort
	I1007 12:38:37.991856  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHKeyPath
	I1007 12:38:37.992011  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHUsername
	I1007 12:38:37.992185  771514 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m04/id_rsa Username:docker}
	I1007 12:38:38.077280  771514 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:38:38.133886  771514 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:38:38.188878  771514 main.go:141] libmachine: Stopping "ha-053933-m04"...
	I1007 12:38:38.188910  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetState
	I1007 12:38:38.190688  771514 main.go:141] libmachine: (ha-053933-m04) Calling .Stop
	I1007 12:38:38.194769  771514 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 0/120
	I1007 12:38:39.226586  771514 main.go:141] libmachine: (ha-053933-m04) Calling .GetState
	I1007 12:38:39.227933  771514 main.go:141] libmachine: Machine "ha-053933-m04" was stopped.
	I1007 12:38:39.227954  771514 stop.go:75] duration metric: took 1.240472902s to stop
	I1007 12:38:39.227977  771514 stop.go:39] StopHost: ha-053933-m03
	I1007 12:38:39.228326  771514 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:38:39.228382  771514 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:38:39.244395  771514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I1007 12:38:39.244862  771514 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:38:39.245379  771514 main.go:141] libmachine: Using API Version  1
	I1007 12:38:39.245406  771514 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:38:39.245764  771514 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:38:39.248124  771514 out.go:177] * Stopping node "ha-053933-m03"  ...
	I1007 12:38:39.249250  771514 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:38:39.249293  771514 main.go:141] libmachine: (ha-053933-m03) Calling .DriverName
	I1007 12:38:39.249577  771514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:38:39.249608  771514 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHHostname
	I1007 12:38:39.252599  771514 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:38:39.253038  771514 main.go:141] libmachine: (ha-053933-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:71:bc", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:33:40 +0000 UTC Type:0 Mac:52:54:00:92:71:bc Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-053933-m03 Clientid:01:52:54:00:92:71:bc}
	I1007 12:38:39.253074  771514 main.go:141] libmachine: (ha-053933-m03) DBG | domain ha-053933-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:92:71:bc in network mk-ha-053933
	I1007 12:38:39.253258  771514 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHPort
	I1007 12:38:39.253428  771514 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHKeyPath
	I1007 12:38:39.253705  771514 main.go:141] libmachine: (ha-053933-m03) Calling .GetSSHUsername
	I1007 12:38:39.253871  771514 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m03/id_rsa Username:docker}
	I1007 12:38:39.344167  771514 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:38:39.407148  771514 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:38:39.462294  771514 main.go:141] libmachine: Stopping "ha-053933-m03"...
	I1007 12:38:39.462321  771514 main.go:141] libmachine: (ha-053933-m03) Calling .GetState
	I1007 12:38:39.464099  771514 main.go:141] libmachine: (ha-053933-m03) Calling .Stop
	I1007 12:38:39.467641  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 0/120
	I1007 12:38:40.469095  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 1/120
	I1007 12:38:41.470795  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 2/120
	I1007 12:38:42.472349  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 3/120
	I1007 12:38:43.473879  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 4/120
	I1007 12:38:44.475944  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 5/120
	I1007 12:38:45.477463  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 6/120
	I1007 12:38:46.479078  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 7/120
	I1007 12:38:47.480678  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 8/120
	I1007 12:38:48.483016  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 9/120
	I1007 12:38:49.485339  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 10/120
	I1007 12:38:50.487041  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 11/120
	I1007 12:38:51.489842  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 12/120
	I1007 12:38:52.491356  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 13/120
	I1007 12:38:53.492889  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 14/120
	I1007 12:38:54.495335  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 15/120
	I1007 12:38:55.497788  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 16/120
	I1007 12:38:56.499023  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 17/120
	I1007 12:38:57.500718  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 18/120
	I1007 12:38:58.502179  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 19/120
	I1007 12:38:59.504771  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 20/120
	I1007 12:39:00.506607  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 21/120
	I1007 12:39:01.509631  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 22/120
	I1007 12:39:02.511365  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 23/120
	I1007 12:39:03.512807  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 24/120
	I1007 12:39:04.514680  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 25/120
	I1007 12:39:05.516597  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 26/120
	I1007 12:39:06.518254  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 27/120
	I1007 12:39:07.520575  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 28/120
	I1007 12:39:08.522162  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 29/120
	I1007 12:39:09.523684  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 30/120
	I1007 12:39:10.525455  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 31/120
	I1007 12:39:11.527611  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 32/120
	I1007 12:39:12.529027  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 33/120
	I1007 12:39:13.530559  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 34/120
	I1007 12:39:14.532545  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 35/120
	I1007 12:39:15.534019  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 36/120
	I1007 12:39:16.535533  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 37/120
	I1007 12:39:17.536884  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 38/120
	I1007 12:39:18.538704  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 39/120
	I1007 12:39:19.541101  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 40/120
	I1007 12:39:20.542621  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 41/120
	I1007 12:39:21.544111  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 42/120
	I1007 12:39:22.545748  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 43/120
	I1007 12:39:23.547149  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 44/120
	I1007 12:39:24.548508  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 45/120
	I1007 12:39:25.550267  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 46/120
	I1007 12:39:26.552443  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 47/120
	I1007 12:39:27.553872  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 48/120
	I1007 12:39:28.555243  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 49/120
	I1007 12:39:29.557098  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 50/120
	I1007 12:39:30.558535  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 51/120
	I1007 12:39:31.559888  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 52/120
	I1007 12:39:32.561180  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 53/120
	I1007 12:39:33.562615  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 54/120
	I1007 12:39:34.564267  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 55/120
	I1007 12:39:35.565912  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 56/120
	I1007 12:39:36.567708  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 57/120
	I1007 12:39:37.569780  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 58/120
	I1007 12:39:38.571306  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 59/120
	I1007 12:39:39.573270  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 60/120
	I1007 12:39:40.574856  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 61/120
	I1007 12:39:41.576761  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 62/120
	I1007 12:39:42.578231  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 63/120
	I1007 12:39:43.579781  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 64/120
	I1007 12:39:44.581695  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 65/120
	I1007 12:39:45.583184  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 66/120
	I1007 12:39:46.584834  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 67/120
	I1007 12:39:47.586319  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 68/120
	I1007 12:39:48.588152  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 69/120
	I1007 12:39:49.590371  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 70/120
	I1007 12:39:50.591891  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 71/120
	I1007 12:39:51.594277  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 72/120
	I1007 12:39:52.596620  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 73/120
	I1007 12:39:53.598123  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 74/120
	I1007 12:39:54.599613  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 75/120
	I1007 12:39:55.601202  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 76/120
	I1007 12:39:56.602562  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 77/120
	I1007 12:39:57.604141  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 78/120
	I1007 12:39:58.606350  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 79/120
	I1007 12:39:59.608247  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 80/120
	I1007 12:40:00.609517  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 81/120
	I1007 12:40:01.610803  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 82/120
	I1007 12:40:02.612395  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 83/120
	I1007 12:40:03.614503  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 84/120
	I1007 12:40:04.616614  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 85/120
	I1007 12:40:05.618613  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 86/120
	I1007 12:40:06.620330  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 87/120
	I1007 12:40:07.622346  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 88/120
	I1007 12:40:08.624718  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 89/120
	I1007 12:40:09.627083  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 90/120
	I1007 12:40:10.628846  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 91/120
	I1007 12:40:11.630695  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 92/120
	I1007 12:40:12.632193  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 93/120
	I1007 12:40:13.633696  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 94/120
	I1007 12:40:14.635656  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 95/120
	I1007 12:40:15.637061  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 96/120
	I1007 12:40:16.638535  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 97/120
	I1007 12:40:17.640033  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 98/120
	I1007 12:40:18.641558  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 99/120
	I1007 12:40:19.643481  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 100/120
	I1007 12:40:20.644873  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 101/120
	I1007 12:40:21.646104  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 102/120
	I1007 12:40:22.647460  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 103/120
	I1007 12:40:23.649196  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 104/120
	I1007 12:40:24.651204  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 105/120
	I1007 12:40:25.652513  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 106/120
	I1007 12:40:26.654761  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 107/120
	I1007 12:40:27.656273  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 108/120
	I1007 12:40:28.657607  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 109/120
	I1007 12:40:29.659409  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 110/120
	I1007 12:40:30.660884  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 111/120
	I1007 12:40:31.662185  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 112/120
	I1007 12:40:32.663791  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 113/120
	I1007 12:40:33.665363  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 114/120
	I1007 12:40:34.667416  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 115/120
	I1007 12:40:35.668938  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 116/120
	I1007 12:40:36.670603  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 117/120
	I1007 12:40:37.672106  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 118/120
	I1007 12:40:38.674386  771514 main.go:141] libmachine: (ha-053933-m03) Waiting for machine to stop 119/120
	I1007 12:40:39.675618  771514 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:40:39.675696  771514 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 12:40:39.678284  771514 out.go:201] 
	W1007 12:40:39.679985  771514 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 12:40:39.680015  771514 out.go:270] * 
	* 
	W1007 12:40:39.683787  771514 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:40:39.685259  771514 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-053933 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-053933 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-053933 --wait=true -v=7 --alsologtostderr: (3m55.692274503s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-053933
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (2.107105519s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-053933 node start m02 -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-053933 -v=7                                                         | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-053933 -v=7                                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-053933 --wait=true -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:44 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-053933                                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:40:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:40:39.747308  772013 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:40:39.747607  772013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:39.747617  772013 out.go:358] Setting ErrFile to fd 2...
	I1007 12:40:39.747622  772013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:39.747884  772013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:40:39.748589  772013 out.go:352] Setting JSON to false
	I1007 12:40:39.749662  772013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8589,"bootTime":1728296251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:40:39.749747  772013 start.go:139] virtualization: kvm guest
	I1007 12:40:39.752291  772013 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:40:39.753891  772013 notify.go:220] Checking for updates...
	I1007 12:40:39.753925  772013 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:40:39.755658  772013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:40:39.757361  772013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:40:39.758836  772013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:40:39.760206  772013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:40:39.761581  772013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:40:39.763205  772013 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:40:39.763351  772013 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:40:39.763965  772013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:39.764046  772013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:39.780000  772013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I1007 12:40:39.780538  772013 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:39.781123  772013 main.go:141] libmachine: Using API Version  1
	I1007 12:40:39.781171  772013 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:39.781570  772013 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:39.781769  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.821261  772013 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:40:39.822755  772013 start.go:297] selected driver: kvm2
	I1007 12:40:39.822780  772013 start.go:901] validating driver "kvm2" against &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:40:39.823003  772013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:40:39.823397  772013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:40:39.823492  772013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:40:39.841212  772013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:40:39.841954  772013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:40:39.841999  772013 cni.go:84] Creating CNI manager for ""
	I1007 12:40:39.842120  772013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:40:39.842197  772013 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:40:39.842376  772013 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:40:39.846266  772013 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:40:39.847976  772013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:40:39.848039  772013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:40:39.848054  772013 cache.go:56] Caching tarball of preloaded images
	I1007 12:40:39.848215  772013 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:40:39.848226  772013 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:40:39.848408  772013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:40:39.848782  772013 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:40:39.848863  772013 start.go:364] duration metric: took 50.504µs to acquireMachinesLock for "ha-053933"
	I1007 12:40:39.848886  772013 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:40:39.848892  772013 fix.go:54] fixHost starting: 
	I1007 12:40:39.849220  772013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:39.849265  772013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:39.865058  772013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I1007 12:40:39.865482  772013 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:39.866044  772013 main.go:141] libmachine: Using API Version  1
	I1007 12:40:39.866092  772013 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:39.866443  772013 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:39.866684  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.866829  772013 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:40:39.868870  772013 fix.go:112] recreateIfNeeded on ha-053933: state=Running err=<nil>
	W1007 12:40:39.868894  772013 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:40:39.872266  772013 out.go:177] * Updating the running kvm2 "ha-053933" VM ...
	I1007 12:40:39.873705  772013 machine.go:93] provisionDockerMachine start ...
	I1007 12:40:39.873768  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.874119  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:39.876740  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.877381  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:39.877407  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.877623  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:39.877830  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.878069  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.878238  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:39.878417  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:39.878668  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:39.878679  772013 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:40:39.995492  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:40:39.995523  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:39.995803  772013 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:40:39.995825  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:39.995959  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:39.998791  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.999191  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:39.999219  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.999340  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:39.999538  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.999685  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.999809  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:39.999978  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.000163  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.000175  772013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:40:40.125677  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:40:40.125727  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.128959  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.129632  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.129665  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.129956  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.130240  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.130424  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.130606  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.130769  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.131003  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.131020  772013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:40:40.255480  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
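	The guarded shell just run keeps the node's own hostname resolvable locally: it rewrites the 127.0.1.1 entry in /etc/hosts if one exists and appends it otherwise, so repeated provisioning stays idempotent. A minimal Go sketch of how such a command could be assembled for an arbitrary hostname (illustration only; hostsFixupCmd is a hypothetical helper, not minikube's actual API):

	  package main

	  import "fmt"

	  // hostsFixupCmd builds the same idempotent /etc/hosts patch shown above,
	  // parameterized by the node hostname (hypothetical helper).
	  func hostsFixupCmd(hostname string) string {
	      return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	      sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	    else
	      echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	    fi
	  fi`, hostname)
	  }

	  func main() {
	      fmt.Println(hostsFixupCmd("ha-053933")) // prints the command the provisioner would run
	  }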
	I1007 12:40:40.255514  772013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:40:40.255535  772013 buildroot.go:174] setting up certificates
	I1007 12:40:40.255545  772013 provision.go:84] configureAuth start
	I1007 12:40:40.255554  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:40.255897  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:40:40.259325  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.259896  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.259948  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.260193  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.262975  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.263589  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.263620  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.263901  772013 provision.go:143] copyHostCerts
	I1007 12:40:40.263939  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:40:40.263982  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:40:40.264006  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:40:40.264118  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:40:40.264257  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:40:40.264285  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:40:40.264294  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:40:40.264341  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:40:40.264434  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:40:40.264459  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:40:40.264463  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:40:40.264489  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:40:40.264577  772013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
	I1007 12:40:40.323125  772013 provision.go:177] copyRemoteCerts
	I1007 12:40:40.323192  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:40:40.323223  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.326447  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.326858  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.326879  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.327208  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.327418  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.327580  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.327697  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:40:40.413859  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:40:40.413954  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:40:40.448110  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:40:40.448233  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:40:40.476030  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:40:40.476136  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:40:40.506002  772013 provision.go:87] duration metric: took 250.439962ms to configureAuth
	I1007 12:40:40.506060  772013 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:40:40.506333  772013 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:40:40.506422  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.508896  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.509246  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.509268  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.509523  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.509750  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.509911  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.510093  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.510258  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.510474  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.510493  772013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:42:11.486751  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:42:11.486789  772013 machine.go:96] duration metric: took 1m31.613066866s to provisionDockerMachine
	I1007 12:42:11.486808  772013 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:42:11.486820  772013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:42:11.486846  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.487320  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:42:11.487356  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.490903  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.491487  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.491520  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.491779  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.491985  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.492227  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.492405  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.579592  772013 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:42:11.584794  772013 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:42:11.584823  772013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:42:11.584921  772013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:42:11.585026  772013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:42:11.585039  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:42:11.585153  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:42:11.597523  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:42:11.628871  772013 start.go:296] duration metric: took 142.045848ms for postStartSetup
	I1007 12:42:11.628941  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.629334  772013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:42:11.629371  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.632308  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.632713  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.632742  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.633037  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.633328  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.633547  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.633691  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	W1007 12:42:11.717149  772013 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 12:42:11.717193  772013 fix.go:56] duration metric: took 1m31.868300995s for fixHost
	I1007 12:42:11.717218  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.720699  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.721065  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.721098  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.721235  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.721476  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.721634  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.721791  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.722080  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:42:11.722319  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:42:11.722334  772013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:42:11.827248  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304931.796013380
	
	I1007 12:42:11.827281  772013 fix.go:216] guest clock: 1728304931.796013380
	I1007 12:42:11.827289  772013 fix.go:229] Guest: 2024-10-07 12:42:11.79601338 +0000 UTC Remote: 2024-10-07 12:42:11.717201256 +0000 UTC m=+92.017887815 (delta=78.812124ms)
	I1007 12:42:11.827310  772013 fix.go:200] guest clock delta is within tolerance: 78.812124ms
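	The tolerance check is plain subtraction of the two timestamps just logged: 1728304931.796013380 s - 1728304931.717201256 s = 0.078812124 s, i.e. about 78.8 ms of guest clock skew, so the fix step records the delta as within tolerance and does not resync the guest clock.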
	I1007 12:42:11.827335  772013 start.go:83] releasing machines lock for "ha-053933", held for 1m31.978440416s
	I1007 12:42:11.827359  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.827670  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:42:11.830278  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.830613  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.830639  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.830783  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831340  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831531  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831641  772013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:42:11.831687  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.831750  772013 ssh_runner.go:195] Run: cat /version.json
	I1007 12:42:11.831782  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.834266  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834630  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.834652  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834671  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834819  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.835017  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.835183  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.835223  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.835244  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.835331  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.835460  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.835624  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.835805  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.835951  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.937074  772013 ssh_runner.go:195] Run: systemctl --version
	I1007 12:42:11.943647  772013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:42:12.110936  772013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:42:12.120426  772013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:42:12.120517  772013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:42:12.130619  772013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:42:12.130652  772013 start.go:495] detecting cgroup driver to use...
	I1007 12:42:12.130740  772013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:42:12.149062  772013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:42:12.164923  772013 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:42:12.164999  772013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:42:12.179655  772013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:42:12.193778  772013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:42:12.347434  772013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:42:12.533189  772013 docker.go:233] disabling docker service ...
	I1007 12:42:12.533269  772013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:42:12.550270  772013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:42:12.565489  772013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:42:12.716554  772013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:42:12.867547  772013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:42:12.883421  772013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:42:12.905499  772013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:42:12.905570  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.917270  772013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:42:12.917337  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.929494  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.941341  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.952765  772013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:42:12.964956  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.977031  772013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.989852  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:13.001605  772013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:42:13.012274  772013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:42:13.022415  772013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:13.167592  772013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:42:14.577887  772013 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.410247757s)
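	Taken together, the sed/grep edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before CRI-O is restarted (a reconstruction from the commands for illustration, not the file as captured from this run):

	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]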
	I1007 12:42:14.577932  772013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:42:14.578011  772013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:42:14.583340  772013 start.go:563] Will wait 60s for crictl version
	I1007 12:42:14.583403  772013 ssh_runner.go:195] Run: which crictl
	I1007 12:42:14.587722  772013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:42:14.628066  772013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:42:14.628176  772013 ssh_runner.go:195] Run: crio --version
	I1007 12:42:14.659543  772013 ssh_runner.go:195] Run: crio --version
	I1007 12:42:14.694014  772013 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:42:14.695726  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:42:14.698789  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:14.699155  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:14.699180  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:14.699409  772013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:42:14.704718  772013 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:42:14.704932  772013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:42:14.704982  772013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:14.751121  772013 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:42:14.751146  772013 crio.go:433] Images already preloaded, skipping extraction
	I1007 12:42:14.751196  772013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:14.791881  772013 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:42:14.791909  772013 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:42:14.791920  772013 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:42:14.792053  772013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:42:14.792127  772013 ssh_runner.go:195] Run: crio config
	I1007 12:42:14.845441  772013 cni.go:84] Creating CNI manager for ""
	I1007 12:42:14.845466  772013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:42:14.845478  772013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:42:14.845504  772013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:42:14.845654  772013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
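	The rendered config stacks four kubeadm API documents in a single YAML stream: an InitConfiguration (node registration, advertise address), a ClusterConfiguration (control-plane endpoint, cert SANs, etcd), a KubeletConfiguration, and a KubeProxyConfiguration; it is written out later as /var/tmp/minikube/kubeadm.yaml.new. A small stand-alone Go sketch that lists the stacked kinds in such a file (a hypothetical checker, not part of minikube):

	  package main

	  import (
	      "fmt"
	      "os"
	      "strings"
	  )

	  func main() {
	      // Read the rendered multi-document kubeadm config and print each document's kind.
	      data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	      if err != nil {
	          fmt.Fprintln(os.Stderr, err)
	          os.Exit(1)
	      }
	      for _, doc := range strings.Split(string(data), "\n---\n") {
	          for _, line := range strings.Split(doc, "\n") {
	              if strings.HasPrefix(line, "kind:") {
	                  fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
	              }
	          }
	      }
	  }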
	
	I1007 12:42:14.845675  772013 kube-vip.go:115] generating kube-vip config ...
	I1007 12:42:14.845719  772013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:42:14.857822  772013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:42:14.857986  772013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
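	This manifest is copied below to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that claims the control-plane VIP 192.168.39.254 on eth0 through ARP and leader election and, with lb_enable set, load-balances API-server traffic on port 8443. A quick way to confirm the VIP is answering, sketched as a stand-alone Go probe that is not part of this test:

	  package main

	  import (
	      "fmt"
	      "net"
	      "time"
	  )

	  func main() {
	      // Dial the kube-vip address on the API server port with a short timeout.
	      conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	      if err != nil {
	          fmt.Println("VIP not reachable:", err)
	          return
	      }
	      conn.Close()
	      fmt.Println("VIP is accepting connections")
	  }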
	I1007 12:42:14.858072  772013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:42:14.868513  772013 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:42:14.868587  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:42:14.878678  772013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:42:14.898096  772013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:42:14.916929  772013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:42:14.935886  772013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:42:14.955589  772013 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:42:14.960312  772013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:15.110512  772013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:42:15.127901  772013 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:42:15.127952  772013 certs.go:194] generating shared ca certs ...
	I1007 12:42:15.127972  772013 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.128186  772013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:42:15.128242  772013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:42:15.128257  772013 certs.go:256] generating profile certs ...
	I1007 12:42:15.128363  772013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:42:15.128400  772013 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449
	I1007 12:42:15.128422  772013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:42:15.355694  772013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 ...
	I1007 12:42:15.355740  772013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449: {Name:mk8ee9f722a829f235f87c2c2735b8033288c6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.355930  772013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449 ...
	I1007 12:42:15.355945  772013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449: {Name:mke69c584b2945b40f89d2813d68d7bc38f89ffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.356017  772013 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:42:15.356158  772013 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:42:15.356296  772013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:42:15.356314  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:42:15.356328  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:42:15.356341  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:42:15.356354  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:42:15.356365  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:42:15.356377  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:42:15.356389  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:42:15.356399  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:42:15.356445  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:42:15.356476  772013 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:42:15.356485  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:42:15.356507  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:42:15.356528  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:42:15.356549  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:42:15.356586  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:42:15.356611  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.356625  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.356638  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.357272  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:42:15.385032  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:42:15.416498  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:42:15.442011  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:42:15.468627  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 12:42:15.496004  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:42:15.523681  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:42:15.551081  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:42:15.578890  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:42:15.605869  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:42:15.630987  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:42:15.656204  772013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:42:15.674074  772013 ssh_runner.go:195] Run: openssl version
	I1007 12:42:15.680450  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:42:15.692649  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.697687  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.697781  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.704199  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:42:15.714248  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:42:15.725428  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.730244  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.730305  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.736310  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:42:15.746565  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:42:15.758662  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.763625  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.763701  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.770190  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:42:15.780976  772013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:42:15.786490  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:42:15.793734  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:42:15.800095  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:42:15.806303  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:42:15.812413  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:42:15.818165  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
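	Each of the openssl x509 -checkend 86400 runs above exits non-zero only if the certificate expires within the next 24 hours, and provisioning continues, so none of the control-plane certificates were near expiry. The same check expressed with Go's crypto/x509, sketched as a hypothetical stand-alone tool rather than minikube's own code:

	  package main

	  import (
	      "crypto/x509"
	      "encoding/pem"
	      "fmt"
	      "os"
	      "time"
	  )

	  func main() {
	      // Equivalent of `openssl x509 -checkend 86400` for one certificate file.
	      data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	      if err != nil {
	          fmt.Fprintln(os.Stderr, err)
	          os.Exit(1)
	      }
	      block, _ := pem.Decode(data)
	      if block == nil {
	          fmt.Fprintln(os.Stderr, "no PEM certificate found")
	          os.Exit(1)
	      }
	      cert, err := x509.ParseCertificate(block.Bytes)
	      if err != nil {
	          fmt.Fprintln(os.Stderr, err)
	          os.Exit(1)
	      }
	      if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	          fmt.Println("certificate expires within 24h:", cert.NotAfter)
	      } else {
	          fmt.Println("certificate is valid for at least another 24h:", cert.NotAfter)
	      }
	  }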
	I1007 12:42:15.824513  772013 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:42:15.824680  772013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:42:15.824730  772013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:42:15.870436  772013 cri.go:89] found id: "a0cbd21935c129b01b1598faa66de584fbfd95ba6d2d4550d57325208a54e86b"
	I1007 12:42:15.870469  772013 cri.go:89] found id: "41fb0ba54f670cd0ca5f39e057da080362b2cfa9d38a6da0262dcb0073427d52"
	I1007 12:42:15.870476  772013 cri.go:89] found id: "78f4113edc9360ea5eeadd4314b5b4b87c16309da4db3db1eb9b9a7e0da0e78b"
	I1007 12:42:15.870481  772013 cri.go:89] found id: "2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4"
	I1007 12:42:15.870485  772013 cri.go:89] found id: "35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5"
	I1007 12:42:15.870490  772013 cri.go:89] found id: "3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416"
	I1007 12:42:15.870494  772013 cri.go:89] found id: "65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c"
	I1007 12:42:15.870498  772013 cri.go:89] found id: "aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437"
	I1007 12:42:15.870502  772013 cri.go:89] found id: "e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd"
	I1007 12:42:15.870512  772013 cri.go:89] found id: "f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255"
	I1007 12:42:15.870517  772013 cri.go:89] found id: "096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525"
	I1007 12:42:15.870521  772013 cri.go:89] found id: "fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866"
	I1007 12:42:15.870525  772013 cri.go:89] found id: "a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38"
	I1007 12:42:15.870529  772013 cri.go:89] found id: ""
	I1007 12:42:15.870584  772013 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.33s)
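For context on the post-mortem above: minikube enumerates kube-system containers by running crictl over SSH (the `sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"` line) and collecting one container ID per output line, which is what produces the `found id:` entries. The following is only a minimal Go sketch of that pattern, assuming local access to crictl; it is not minikube's actual cri.go code and the helper name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl invocation shown in the log
// above and returns one container ID per non-empty output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}

Run on a node where crictl is installed, this prints the same kind of ID list that appears as the `found id:` lines in the post-mortem.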

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 stop -v=7 --alsologtostderr
E1007 12:45:13.698173  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:46:36.765224  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-053933 stop -v=7 --alsologtostderr: exit status 82 (2m0.511470859s)

                                                
                                                
-- stdout --
	* Stopping node "ha-053933-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:44:55.432054  773710 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:44:55.432333  773710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:44:55.432344  773710 out.go:358] Setting ErrFile to fd 2...
	I1007 12:44:55.432350  773710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:44:55.432541  773710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:44:55.432859  773710 out.go:352] Setting JSON to false
	I1007 12:44:55.432973  773710 mustload.go:65] Loading cluster: ha-053933
	I1007 12:44:55.433522  773710 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:44:55.433679  773710 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:44:55.433930  773710 mustload.go:65] Loading cluster: ha-053933
	I1007 12:44:55.434164  773710 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:44:55.434224  773710 stop.go:39] StopHost: ha-053933-m04
	I1007 12:44:55.434659  773710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:44:55.434721  773710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:44:55.451357  773710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I1007 12:44:55.451936  773710 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:44:55.452642  773710 main.go:141] libmachine: Using API Version  1
	I1007 12:44:55.452669  773710 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:44:55.453015  773710 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:44:55.456130  773710 out.go:177] * Stopping node "ha-053933-m04"  ...
	I1007 12:44:55.457534  773710 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:44:55.457587  773710 main.go:141] libmachine: (ha-053933-m04) Calling .DriverName
	I1007 12:44:55.457863  773710 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:44:55.457905  773710 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHHostname
	I1007 12:44:55.461193  773710 main.go:141] libmachine: (ha-053933-m04) DBG | domain ha-053933-m04 has defined MAC address 52:54:00:4c:af:8f in network mk-ha-053933
	I1007 12:44:55.461757  773710 main.go:141] libmachine: (ha-053933-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:af:8f", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:44:22 +0000 UTC Type:0 Mac:52:54:00:4c:af:8f Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-053933-m04 Clientid:01:52:54:00:4c:af:8f}
	I1007 12:44:55.461807  773710 main.go:141] libmachine: (ha-053933-m04) DBG | domain ha-053933-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:4c:af:8f in network mk-ha-053933
	I1007 12:44:55.461938  773710 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHPort
	I1007 12:44:55.462168  773710 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHKeyPath
	I1007 12:44:55.462341  773710 main.go:141] libmachine: (ha-053933-m04) Calling .GetSSHUsername
	I1007 12:44:55.462582  773710 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933-m04/id_rsa Username:docker}
	I1007 12:44:55.553423  773710 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:44:55.607894  773710 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:44:55.660796  773710 main.go:141] libmachine: Stopping "ha-053933-m04"...
	I1007 12:44:55.660843  773710 main.go:141] libmachine: (ha-053933-m04) Calling .GetState
	I1007 12:44:55.662524  773710 main.go:141] libmachine: (ha-053933-m04) Calling .Stop
	I1007 12:44:55.666487  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 0/120
	I1007 12:44:56.668207  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 1/120
	I1007 12:44:57.669633  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 2/120
	I1007 12:44:58.671243  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 3/120
	I1007 12:44:59.672911  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 4/120
	I1007 12:45:00.674545  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 5/120
	I1007 12:45:01.675911  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 6/120
	I1007 12:45:02.677391  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 7/120
	I1007 12:45:03.679095  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 8/120
	I1007 12:45:04.680769  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 9/120
	I1007 12:45:05.683076  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 10/120
	I1007 12:45:06.684974  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 11/120
	I1007 12:45:07.686548  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 12/120
	I1007 12:45:08.687759  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 13/120
	I1007 12:45:09.689197  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 14/120
	I1007 12:45:10.691301  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 15/120
	I1007 12:45:11.692702  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 16/120
	I1007 12:45:12.694335  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 17/120
	I1007 12:45:13.695807  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 18/120
	I1007 12:45:14.698089  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 19/120
	I1007 12:45:15.700460  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 20/120
	I1007 12:45:16.701507  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 21/120
	I1007 12:45:17.703908  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 22/120
	I1007 12:45:18.705134  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 23/120
	I1007 12:45:19.707473  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 24/120
	I1007 12:45:20.709533  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 25/120
	I1007 12:45:21.711180  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 26/120
	I1007 12:45:22.712484  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 27/120
	I1007 12:45:23.713986  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 28/120
	I1007 12:45:24.715862  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 29/120
	I1007 12:45:25.717617  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 30/120
	I1007 12:45:26.719051  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 31/120
	I1007 12:45:27.720373  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 32/120
	I1007 12:45:28.721888  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 33/120
	I1007 12:45:29.723400  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 34/120
	I1007 12:45:30.725624  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 35/120
	I1007 12:45:31.727332  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 36/120
	I1007 12:45:32.729154  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 37/120
	I1007 12:45:33.730891  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 38/120
	I1007 12:45:34.732527  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 39/120
	I1007 12:45:35.734644  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 40/120
	I1007 12:45:36.736913  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 41/120
	I1007 12:45:37.738771  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 42/120
	I1007 12:45:38.740320  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 43/120
	I1007 12:45:39.742196  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 44/120
	I1007 12:45:40.744342  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 45/120
	I1007 12:45:41.745877  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 46/120
	I1007 12:45:42.747298  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 47/120
	I1007 12:45:43.748699  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 48/120
	I1007 12:45:44.750242  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 49/120
	I1007 12:45:45.751918  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 50/120
	I1007 12:45:46.754514  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 51/120
	I1007 12:45:47.756483  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 52/120
	I1007 12:45:48.759009  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 53/120
	I1007 12:45:49.760805  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 54/120
	I1007 12:45:50.762887  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 55/120
	I1007 12:45:51.764672  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 56/120
	I1007 12:45:52.766055  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 57/120
	I1007 12:45:53.767648  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 58/120
	I1007 12:45:54.769466  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 59/120
	I1007 12:45:55.771626  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 60/120
	I1007 12:45:56.773125  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 61/120
	I1007 12:45:57.775106  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 62/120
	I1007 12:45:58.776705  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 63/120
	I1007 12:45:59.779070  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 64/120
	I1007 12:46:00.780996  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 65/120
	I1007 12:46:01.782698  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 66/120
	I1007 12:46:02.785278  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 67/120
	I1007 12:46:03.787988  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 68/120
	I1007 12:46:04.789481  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 69/120
	I1007 12:46:05.790979  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 70/120
	I1007 12:46:06.792687  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 71/120
	I1007 12:46:07.794238  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 72/120
	I1007 12:46:08.796696  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 73/120
	I1007 12:46:09.798557  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 74/120
	I1007 12:46:10.800630  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 75/120
	I1007 12:46:11.803049  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 76/120
	I1007 12:46:12.804972  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 77/120
	I1007 12:46:13.806604  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 78/120
	I1007 12:46:14.808613  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 79/120
	I1007 12:46:15.810928  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 80/120
	I1007 12:46:16.812563  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 81/120
	I1007 12:46:17.814081  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 82/120
	I1007 12:46:18.815551  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 83/120
	I1007 12:46:19.817160  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 84/120
	I1007 12:46:20.819475  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 85/120
	I1007 12:46:21.820897  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 86/120
	I1007 12:46:22.823330  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 87/120
	I1007 12:46:23.824813  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 88/120
	I1007 12:46:24.827231  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 89/120
	I1007 12:46:25.828932  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 90/120
	I1007 12:46:26.830505  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 91/120
	I1007 12:46:27.831992  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 92/120
	I1007 12:46:28.833610  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 93/120
	I1007 12:46:29.835304  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 94/120
	I1007 12:46:30.837837  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 95/120
	I1007 12:46:31.839444  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 96/120
	I1007 12:46:32.841484  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 97/120
	I1007 12:46:33.843175  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 98/120
	I1007 12:46:34.844838  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 99/120
	I1007 12:46:35.846982  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 100/120
	I1007 12:46:36.848682  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 101/120
	I1007 12:46:37.850373  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 102/120
	I1007 12:46:38.852974  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 103/120
	I1007 12:46:39.855412  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 104/120
	I1007 12:46:40.857197  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 105/120
	I1007 12:46:41.858758  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 106/120
	I1007 12:46:42.861374  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 107/120
	I1007 12:46:43.863017  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 108/120
	I1007 12:46:44.864374  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 109/120
	I1007 12:46:45.866503  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 110/120
	I1007 12:46:46.867948  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 111/120
	I1007 12:46:47.869326  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 112/120
	I1007 12:46:48.870995  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 113/120
	I1007 12:46:49.872478  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 114/120
	I1007 12:46:50.874698  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 115/120
	I1007 12:46:51.876800  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 116/120
	I1007 12:46:52.878549  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 117/120
	I1007 12:46:53.880017  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 118/120
	I1007 12:46:54.882138  773710 main.go:141] libmachine: (ha-053933-m04) Waiting for machine to stop 119/120
	I1007 12:46:55.883440  773710 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:46:55.883526  773710 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 12:46:55.885699  773710 out.go:201] 
	W1007 12:46:55.887452  773710 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 12:46:55.887470  773710 out.go:270] * 
	* 
	W1007 12:46:55.891107  773710 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:46:55.892579  773710 out.go:201] 

                                                
                                                
** /stderr **
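The stderr block above shows the GUEST_STOP_TIMEOUT path that makes this test fail: the kvm2 driver issues a stop, polls the machine state roughly once per second ("Waiting for machine to stop N/120"), and gives up after 120 attempts while the guest still reports "Running", which surfaces as exit status 82. What follows is only a minimal Go sketch of that poll-with-budget pattern, assuming a hypothetical vmDriver interface rather than minikube's real libmachine types.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmDriver abstracts the two driver calls visible in the log: Stop and GetState.
type vmDriver interface {
	Stop() error
	GetState() (string, error)
}

// stopWithTimeout issues a stop request, then polls the state once per second
// for up to `attempts` tries, returning an error if the machine never leaves
// the "Running" state within that budget.
func stopWithTimeout(d vmDriver, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates the failure mode in this run: the guest ignores the stop
// request and keeps reporting "Running".
type stuckVM struct{}

func (stuckVM) Stop() error               { return nil }
func (stuckVM) GetState() (string, error) { return "Running", nil }

func main() {
	// The run above uses 120 attempts; a smaller budget keeps the demo short.
	if err := stopWithTimeout(stuckVM{}, 5); err != nil {
		fmt.Println("stop host returned error:", err)
	}
}

With the stuck driver the error message matches the "unable to stop vm" wording in the log; a driver whose GetState eventually returns a stopped state would make the function return nil before the budget is spent.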
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-053933 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr: (18.908287027s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-053933 -n ha-053933
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 logs -n 25: (2.069498753s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m04 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp testdata/cp-test.txt                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt                     |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933 sudo cat                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933.txt                               |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m02 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n                                                               | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | ha-053933-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-053933 ssh -n ha-053933-m03 sudo cat                                        | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC | 07 Oct 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-053933 node stop m02 -v=7                                                   | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:35 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-053933 node start m02 -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-053933 -v=7                                                         | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-053933 -v=7                                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:38 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-053933 --wait=true -v=7                                                  | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:44 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-053933                                                              | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC |                     |
	| node    | ha-053933 node delete m03 -v=7                                                 | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC | 07 Oct 24 12:44 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-053933 stop -v=7                                                            | ha-053933 | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:40:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:40:39.747308  772013 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:40:39.747607  772013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:39.747617  772013 out.go:358] Setting ErrFile to fd 2...
	I1007 12:40:39.747622  772013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:39.747884  772013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:40:39.748589  772013 out.go:352] Setting JSON to false
	I1007 12:40:39.749662  772013 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8589,"bootTime":1728296251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:40:39.749747  772013 start.go:139] virtualization: kvm guest
	I1007 12:40:39.752291  772013 out.go:177] * [ha-053933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:40:39.753891  772013 notify.go:220] Checking for updates...
	I1007 12:40:39.753925  772013 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:40:39.755658  772013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:40:39.757361  772013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:40:39.758836  772013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:40:39.760206  772013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:40:39.761581  772013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:40:39.763205  772013 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:40:39.763351  772013 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:40:39.763965  772013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:39.764046  772013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:39.780000  772013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I1007 12:40:39.780538  772013 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:39.781123  772013 main.go:141] libmachine: Using API Version  1
	I1007 12:40:39.781171  772013 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:39.781570  772013 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:39.781769  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.821261  772013 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:40:39.822755  772013 start.go:297] selected driver: kvm2
	I1007 12:40:39.822780  772013 start.go:901] validating driver "kvm2" against &{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:40:39.823003  772013 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:40:39.823397  772013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:40:39.823492  772013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:40:39.841212  772013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:40:39.841954  772013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:40:39.841999  772013 cni.go:84] Creating CNI manager for ""
	I1007 12:40:39.842120  772013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:40:39.842197  772013 start.go:340] cluster config:
	{Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:40:39.842376  772013 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:40:39.846266  772013 out.go:177] * Starting "ha-053933" primary control-plane node in "ha-053933" cluster
	I1007 12:40:39.847976  772013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:40:39.848039  772013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:40:39.848054  772013 cache.go:56] Caching tarball of preloaded images
	I1007 12:40:39.848215  772013 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:40:39.848226  772013 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:40:39.848408  772013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/config.json ...
	I1007 12:40:39.848782  772013 start.go:360] acquireMachinesLock for ha-053933: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:40:39.848863  772013 start.go:364] duration metric: took 50.504µs to acquireMachinesLock for "ha-053933"
	I1007 12:40:39.848886  772013 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:40:39.848892  772013 fix.go:54] fixHost starting: 
	I1007 12:40:39.849220  772013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:39.849265  772013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:39.865058  772013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I1007 12:40:39.865482  772013 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:39.866044  772013 main.go:141] libmachine: Using API Version  1
	I1007 12:40:39.866092  772013 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:39.866443  772013 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:39.866684  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.866829  772013 main.go:141] libmachine: (ha-053933) Calling .GetState
	I1007 12:40:39.868870  772013 fix.go:112] recreateIfNeeded on ha-053933: state=Running err=<nil>
	W1007 12:40:39.868894  772013 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:40:39.872266  772013 out.go:177] * Updating the running kvm2 "ha-053933" VM ...
	I1007 12:40:39.873705  772013 machine.go:93] provisionDockerMachine start ...
	I1007 12:40:39.873768  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:40:39.874119  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:39.876740  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.877381  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:39.877407  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.877623  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:39.877830  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.878069  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.878238  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:39.878417  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:39.878668  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:39.878679  772013 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:40:39.995492  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:40:39.995523  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:39.995803  772013 buildroot.go:166] provisioning hostname "ha-053933"
	I1007 12:40:39.995825  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:39.995959  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:39.998791  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.999191  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:39.999219  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:39.999340  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:39.999538  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.999685  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:39.999809  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:39.999978  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.000163  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.000175  772013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-053933 && echo "ha-053933" | sudo tee /etc/hostname
	I1007 12:40:40.125677  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-053933
	
	I1007 12:40:40.125727  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.128959  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.129632  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.129665  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.129956  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.130240  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.130424  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.130606  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.130769  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.131003  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.131020  772013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-053933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-053933/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-053933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:40:40.255480  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:40:40.255514  772013 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 12:40:40.255535  772013 buildroot.go:174] setting up certificates
	I1007 12:40:40.255545  772013 provision.go:84] configureAuth start
	I1007 12:40:40.255554  772013 main.go:141] libmachine: (ha-053933) Calling .GetMachineName
	I1007 12:40:40.255897  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:40:40.259325  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.259896  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.259948  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.260193  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.262975  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.263589  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.263620  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.263901  772013 provision.go:143] copyHostCerts
	I1007 12:40:40.263939  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:40:40.263982  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 12:40:40.264006  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 12:40:40.264118  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 12:40:40.264257  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:40:40.264285  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 12:40:40.264294  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 12:40:40.264341  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 12:40:40.264434  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:40:40.264459  772013 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 12:40:40.264463  772013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 12:40:40.264489  772013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 12:40:40.264577  772013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.ha-053933 san=[127.0.0.1 192.168.39.152 ha-053933 localhost minikube]
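	# A minimal check (not from the recorded run): the SAN list logged above can be inspected
	# directly in the generated server cert (path taken from the log line above).
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'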
	I1007 12:40:40.323125  772013 provision.go:177] copyRemoteCerts
	I1007 12:40:40.323192  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:40:40.323223  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.326447  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.326858  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.326879  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.327208  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.327418  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.327580  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.327697  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:40:40.413859  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:40:40.413954  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:40:40.448110  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:40:40.448233  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:40:40.476030  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:40:40.476136  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:40:40.506002  772013 provision.go:87] duration metric: took 250.439962ms to configureAuth
	I1007 12:40:40.506060  772013 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:40:40.506333  772013 config.go:182] Loaded profile config "ha-053933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:40:40.506422  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:40:40.508896  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.509246  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:40:40.509268  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:40:40.509523  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:40:40.509750  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.509911  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:40:40.510093  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:40:40.510258  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:40:40.510474  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:40:40.510493  772013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:42:11.486751  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:42:11.486789  772013 machine.go:96] duration metric: took 1m31.613066866s to provisionDockerMachine
	I1007 12:42:11.486808  772013 start.go:293] postStartSetup for "ha-053933" (driver="kvm2")
	I1007 12:42:11.486820  772013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:42:11.486846  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.487320  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:42:11.487356  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.490903  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.491487  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.491520  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.491779  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.491985  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.492227  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.492405  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.579592  772013 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:42:11.584794  772013 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:42:11.584823  772013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 12:42:11.584921  772013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 12:42:11.585026  772013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 12:42:11.585039  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 12:42:11.585153  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:42:11.597523  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:42:11.628871  772013 start.go:296] duration metric: took 142.045848ms for postStartSetup
	I1007 12:42:11.628941  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.629334  772013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:42:11.629371  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.632308  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.632713  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.632742  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.633037  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.633328  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.633547  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.633691  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	W1007 12:42:11.717149  772013 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 12:42:11.717193  772013 fix.go:56] duration metric: took 1m31.868300995s for fixHost
	I1007 12:42:11.717218  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.720699  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.721065  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.721098  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.721235  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.721476  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.721634  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.721791  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.722080  772013 main.go:141] libmachine: Using SSH client type: native
	I1007 12:42:11.722319  772013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I1007 12:42:11.722334  772013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:42:11.827248  772013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304931.796013380
	
	I1007 12:42:11.827281  772013 fix.go:216] guest clock: 1728304931.796013380
	I1007 12:42:11.827289  772013 fix.go:229] Guest: 2024-10-07 12:42:11.79601338 +0000 UTC Remote: 2024-10-07 12:42:11.717201256 +0000 UTC m=+92.017887815 (delta=78.812124ms)
	I1007 12:42:11.827310  772013 fix.go:200] guest clock delta is within tolerance: 78.812124ms
	I1007 12:42:11.827335  772013 start.go:83] releasing machines lock for "ha-053933", held for 1m31.978440416s
	I1007 12:42:11.827359  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.827670  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:42:11.830278  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.830613  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.830639  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.830783  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831340  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831531  772013 main.go:141] libmachine: (ha-053933) Calling .DriverName
	I1007 12:42:11.831641  772013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:42:11.831687  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.831750  772013 ssh_runner.go:195] Run: cat /version.json
	I1007 12:42:11.831782  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHHostname
	I1007 12:42:11.834266  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834630  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.834652  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834671  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.834819  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.835017  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.835183  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.835223  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:11.835244  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:11.835331  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.835460  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHPort
	I1007 12:42:11.835624  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHKeyPath
	I1007 12:42:11.835805  772013 main.go:141] libmachine: (ha-053933) Calling .GetSSHUsername
	I1007 12:42:11.835951  772013 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/ha-053933/id_rsa Username:docker}
	I1007 12:42:11.937074  772013 ssh_runner.go:195] Run: systemctl --version
	I1007 12:42:11.943647  772013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:42:12.110936  772013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:42:12.120426  772013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:42:12.120517  772013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:42:12.130619  772013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:42:12.130652  772013 start.go:495] detecting cgroup driver to use...
	I1007 12:42:12.130740  772013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:42:12.149062  772013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:42:12.164923  772013 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:42:12.164999  772013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:42:12.179655  772013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:42:12.193778  772013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:42:12.347434  772013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:42:12.533189  772013 docker.go:233] disabling docker service ...
	I1007 12:42:12.533269  772013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:42:12.550270  772013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:42:12.565489  772013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:42:12.716554  772013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:42:12.867547  772013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:42:12.883421  772013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:42:12.905499  772013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:42:12.905570  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.917270  772013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:42:12.917337  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.929494  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.941341  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.952765  772013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:42:12.964956  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.977031  772013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:12.989852  772013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:42:13.001605  772013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:42:13.012274  772013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:42:13.022415  772013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:13.167592  772013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:42:14.577887  772013 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.410247757s)
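	# Illustrative follow-up, not from the recorded run: confirm the sed edits above landed in the
	# CRI-O drop-in; per the log, expected values are pause_image "registry.k8s.io/pause:3.10",
	# cgroup_manager "cgroupfs" and conmon_cgroup "pod".
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf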
	I1007 12:42:14.577932  772013 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:42:14.578011  772013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:42:14.583340  772013 start.go:563] Will wait 60s for crictl version
	I1007 12:42:14.583403  772013 ssh_runner.go:195] Run: which crictl
	I1007 12:42:14.587722  772013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:42:14.628066  772013 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:42:14.628176  772013 ssh_runner.go:195] Run: crio --version
	I1007 12:42:14.659543  772013 ssh_runner.go:195] Run: crio --version
	I1007 12:42:14.694014  772013 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:42:14.695726  772013 main.go:141] libmachine: (ha-053933) Calling .GetIP
	I1007 12:42:14.698789  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:14.699155  772013 main.go:141] libmachine: (ha-053933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:91:1b", ip: ""} in network mk-ha-053933: {Iface:virbr1 ExpiryTime:2024-10-07 13:31:32 +0000 UTC Type:0 Mac:52:54:00:7e:91:1b Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-053933 Clientid:01:52:54:00:7e:91:1b}
	I1007 12:42:14.699180  772013 main.go:141] libmachine: (ha-053933) DBG | domain ha-053933 has defined IP address 192.168.39.152 and MAC address 52:54:00:7e:91:1b in network mk-ha-053933
	I1007 12:42:14.699409  772013 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:42:14.704718  772013 kubeadm.go:883] updating cluster {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:42:14.704932  772013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:42:14.704982  772013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:14.751121  772013 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:42:14.751146  772013 crio.go:433] Images already preloaded, skipping extraction
	I1007 12:42:14.751196  772013 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:14.791881  772013 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:42:14.791909  772013 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:42:14.791920  772013 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.31.1 crio true true} ...
	I1007 12:42:14.792053  772013 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-053933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
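	# For reference (not from the recorded run): the kubelet unit plus the drop-in generated above
	# (installed further below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) can be
	# viewed on the node with:
	sudo systemctl cat kubelet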
	I1007 12:42:14.792127  772013 ssh_runner.go:195] Run: crio config
	I1007 12:42:14.845441  772013 cni.go:84] Creating CNI manager for ""
	I1007 12:42:14.845466  772013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:42:14.845478  772013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:42:14.845504  772013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-053933 NodeName:ha-053933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:42:14.845654  772013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-053933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:42:14.845675  772013 kube-vip.go:115] generating kube-vip config ...
	I1007 12:42:14.845719  772013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:42:14.857822  772013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:42:14.857986  772013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:42:14.858072  772013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:42:14.868513  772013 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:42:14.868587  772013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:42:14.878678  772013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:42:14.898096  772013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:42:14.916929  772013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
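	# A hedged sketch (not from the recorded run): on kubeadm releases that ship the "config validate"
	# subcommand, and assuming kubeadm sits alongside kubelet in the binaries directory listed above,
	# the generated config could be sanity-checked in place:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new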
	I1007 12:42:14.935886  772013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:42:14.955589  772013 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:42:14.960312  772013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:15.110512  772013 ssh_runner.go:195] Run: sudo systemctl start kubelet
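	# Illustrative only: once kubelet is up it should start the static pod defined by
	# /etc/kubernetes/manifests/kube-vip.yaml copied above; the container can be located with:
	sudo crictl ps --name kube-vip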
	I1007 12:42:15.127901  772013 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933 for IP: 192.168.39.152
	I1007 12:42:15.127952  772013 certs.go:194] generating shared ca certs ...
	I1007 12:42:15.127972  772013 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.128186  772013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 12:42:15.128242  772013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 12:42:15.128257  772013 certs.go:256] generating profile certs ...
	I1007 12:42:15.128363  772013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/client.key
	I1007 12:42:15.128400  772013 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449
	I1007 12:42:15.128422  772013 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.227 192.168.39.53 192.168.39.254]
	I1007 12:42:15.355694  772013 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 ...
	I1007 12:42:15.355740  772013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449: {Name:mk8ee9f722a829f235f87c2c2735b8033288c6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.355930  772013 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449 ...
	I1007 12:42:15.355945  772013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449: {Name:mke69c584b2945b40f89d2813d68d7bc38f89ffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:15.356017  772013 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt.8d6ba449 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt
	I1007 12:42:15.356158  772013 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key.8d6ba449 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key
	I1007 12:42:15.356296  772013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key
	I1007 12:42:15.356314  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:42:15.356328  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:42:15.356341  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:42:15.356354  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:42:15.356365  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:42:15.356377  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:42:15.356389  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:42:15.356399  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:42:15.356445  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 12:42:15.356476  772013 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 12:42:15.356485  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:42:15.356507  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:42:15.356528  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:42:15.356549  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 12:42:15.356586  772013 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 12:42:15.356611  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.356625  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.356638  772013 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.357272  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:42:15.385032  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:42:15.416498  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:42:15.442011  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:42:15.468627  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 12:42:15.496004  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:42:15.523681  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:42:15.551081  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/ha-053933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:42:15.578890  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 12:42:15.605869  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:42:15.630987  772013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 12:42:15.656204  772013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:42:15.674074  772013 ssh_runner.go:195] Run: openssl version
	I1007 12:42:15.680450  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:42:15.692649  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.697687  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.697781  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:15.704199  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:42:15.714248  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 12:42:15.725428  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.730244  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.730305  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 12:42:15.736310  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 12:42:15.746565  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 12:42:15.758662  772013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.763625  772013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.763701  772013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 12:42:15.770190  772013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:42:15.780976  772013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:42:15.786490  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:42:15.793734  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:42:15.800095  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:42:15.806303  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:42:15.812413  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:42:15.818165  772013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
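	# A small sketch (not from the recorded run): the per-certificate expiry checks above can be
	# reproduced in one pass on the node (cert names taken from the preceding log lines):
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
	done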
	I1007 12:42:15.824513  772013 kubeadm.go:392] StartCluster: {Name:ha-053933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-053933 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.244 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:42:15.824680  772013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:42:15.824730  772013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:42:15.870436  772013 cri.go:89] found id: "a0cbd21935c129b01b1598faa66de584fbfd95ba6d2d4550d57325208a54e86b"
	I1007 12:42:15.870469  772013 cri.go:89] found id: "41fb0ba54f670cd0ca5f39e057da080362b2cfa9d38a6da0262dcb0073427d52"
	I1007 12:42:15.870476  772013 cri.go:89] found id: "78f4113edc9360ea5eeadd4314b5b4b87c16309da4db3db1eb9b9a7e0da0e78b"
	I1007 12:42:15.870481  772013 cri.go:89] found id: "2867817e1f48022c3f222e803961f4096cafab8ad683264d2f47d75b3aab36b4"
	I1007 12:42:15.870485  772013 cri.go:89] found id: "35044c701c165d8c437dd5fb67ffd682a77bda9d4d10c98e815d7b4774ec91c5"
	I1007 12:42:15.870490  772013 cri.go:89] found id: "3da0371dd728786821ec2726dc66484ede5f929bbee31c90d035d6f7cba5d416"
	I1007 12:42:15.870494  772013 cri.go:89] found id: "65adc93f12fb71ff467df8b9f089cf2a537b22de8101316cf150a283991a215c"
	I1007 12:42:15.870498  772013 cri.go:89] found id: "aea74cdff9eee8b24311abb725f6208096c7e0037abc993d70cf2b24c1038437"
	I1007 12:42:15.870502  772013 cri.go:89] found id: "e756202203ed3be95119696f1e5d7bc94ea8e4a604b3ba8a11a10aaefade8edd"
	I1007 12:42:15.870512  772013 cri.go:89] found id: "f190ed8ea3a7d0bcc8ad9ef86f5363858223610c4e976fbb5ce15155f510d255"
	I1007 12:42:15.870517  772013 cri.go:89] found id: "096488f00109216526de5556fd62486cbcde594daad6bcf7e6cef2dc644a0525"
	I1007 12:42:15.870521  772013 cri.go:89] found id: "fe11729317aca92ac74fdbbd4b9c746a9eeead6dc404d65c371290427601a866"
	I1007 12:42:15.870525  772013 cri.go:89] found id: "a23f58b62cf7ab53599ef1a1f44a99519aadfa671c447fd5d9a50391c29cde38"
	I1007 12:42:15.870529  772013 cri.go:89] found id: ""
	I1007 12:42:15.870584  772013 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-053933 -n ha-053933
helpers_test.go:261: (dbg) Run:  kubectl --context ha-053933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.11s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (319.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-723069
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-723069
E1007 13:03:16.769232  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-723069: exit status 82 (2m1.748457482s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-723069-m03"  ...
	* Stopping node "multinode-723069-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-723069" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-723069 --wait=true -v=8 --alsologtostderr
E1007 13:04:53.454211  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:05:13.698643  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-723069 --wait=true -v=8 --alsologtostderr: (3m15.04763715s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-723069
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-723069 -n multinode-723069
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 logs -n 25: (2.139794149s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069:/home/docker/cp-test_multinode-723069-m02_multinode-723069.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069 sudo cat                                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m02_multinode-723069.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03:/home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069-m03 sudo cat                                   | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp testdata/cp-test.txt                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069:/home/docker/cp-test_multinode-723069-m03_multinode-723069.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069 sudo cat                                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02:/home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069-m02 sudo cat                                   | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-723069 node stop m03                                                          | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	| node    | multinode-723069 node start                                                             | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| stop    | -p multinode-723069                                                                     | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| start   | -p multinode-723069                                                                     | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:04 UTC | 07 Oct 24 13:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:04:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:04:28.054395  783699 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:04:28.054537  783699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:04:28.054545  783699 out.go:358] Setting ErrFile to fd 2...
	I1007 13:04:28.054550  783699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:04:28.054719  783699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:04:28.055304  783699 out.go:352] Setting JSON to false
	I1007 13:04:28.056283  783699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10017,"bootTime":1728296251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:04:28.056433  783699 start.go:139] virtualization: kvm guest
	I1007 13:04:28.058923  783699 out.go:177] * [multinode-723069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:04:28.060685  783699 notify.go:220] Checking for updates...
	I1007 13:04:28.060746  783699 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:04:28.062744  783699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:04:28.064378  783699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:04:28.065747  783699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:04:28.067098  783699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:04:28.068411  783699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:04:28.070442  783699 config.go:182] Loaded profile config "multinode-723069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:28.070632  783699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:04:28.071408  783699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:04:28.071511  783699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:04:28.087822  783699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1007 13:04:28.088381  783699 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:04:28.089091  783699 main.go:141] libmachine: Using API Version  1
	I1007 13:04:28.089123  783699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:04:28.089606  783699 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:04:28.089854  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.126566  783699 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:04:28.127855  783699 start.go:297] selected driver: kvm2
	I1007 13:04:28.127875  783699 start.go:901] validating driver "kvm2" against &{Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:28.128046  783699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:04:28.128374  783699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:28.128493  783699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:04:28.144272  783699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:04:28.144968  783699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:04:28.145004  783699 cni.go:84] Creating CNI manager for ""
	I1007 13:04:28.145076  783699 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:04:28.145131  783699 start.go:340] cluster config:
	{Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubefl
ow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:28.145259  783699 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:28.147227  783699 out.go:177] * Starting "multinode-723069" primary control-plane node in "multinode-723069" cluster
	I1007 13:04:28.148415  783699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:28.148474  783699 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:04:28.148487  783699 cache.go:56] Caching tarball of preloaded images
	I1007 13:04:28.148576  783699 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:04:28.148589  783699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:04:28.148754  783699 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/config.json ...
	I1007 13:04:28.149005  783699 start.go:360] acquireMachinesLock for multinode-723069: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:04:28.149060  783699 start.go:364] duration metric: took 30.127µs to acquireMachinesLock for "multinode-723069"
	I1007 13:04:28.149082  783699 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:04:28.149092  783699 fix.go:54] fixHost starting: 
	I1007 13:04:28.149386  783699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:04:28.149419  783699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:04:28.164793  783699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1007 13:04:28.165339  783699 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:04:28.165927  783699 main.go:141] libmachine: Using API Version  1
	I1007 13:04:28.165955  783699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:04:28.166310  783699 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:04:28.166500  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.166639  783699 main.go:141] libmachine: (multinode-723069) Calling .GetState
	I1007 13:04:28.168205  783699 fix.go:112] recreateIfNeeded on multinode-723069: state=Running err=<nil>
	W1007 13:04:28.168234  783699 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:04:28.170422  783699 out.go:177] * Updating the running kvm2 "multinode-723069" VM ...
	I1007 13:04:28.172040  783699 machine.go:93] provisionDockerMachine start ...
	I1007 13:04:28.172089  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.172430  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.175256  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.175708  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.175744  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.175991  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.176211  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.176369  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.176522  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.176718  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.176980  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.176996  783699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:04:28.287771  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-723069
	
	I1007 13:04:28.287814  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.288104  783699 buildroot.go:166] provisioning hostname "multinode-723069"
	I1007 13:04:28.288139  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.288356  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.292009  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.292574  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.292609  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.292871  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.293127  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.293384  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.293621  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.293824  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.294017  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.294059  783699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-723069 && echo "multinode-723069" | sudo tee /etc/hostname
	I1007 13:04:28.418976  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-723069
	
	I1007 13:04:28.419010  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.422176  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.422536  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.422577  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.422739  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.422949  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.423122  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.423240  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.423402  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.423594  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.423610  783699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-723069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-723069/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-723069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:04:28.541984  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:04:28.542149  783699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:04:28.542191  783699 buildroot.go:174] setting up certificates
	I1007 13:04:28.542202  783699 provision.go:84] configureAuth start
	I1007 13:04:28.542229  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.542545  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:04:28.545089  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.545550  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.545579  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.545700  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.548731  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.549138  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.549189  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.549369  783699 provision.go:143] copyHostCerts
	I1007 13:04:28.549405  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:04:28.549454  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:04:28.549473  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:04:28.549559  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:04:28.549673  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:04:28.549719  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:04:28.549729  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:04:28.549766  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:04:28.549845  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:04:28.549867  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:04:28.549889  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:04:28.549926  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:04:28.550011  783699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.multinode-723069 san=[127.0.0.1 192.168.39.213 localhost minikube multinode-723069]
	I1007 13:04:28.730114  783699 provision.go:177] copyRemoteCerts
	I1007 13:04:28.730180  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:04:28.730211  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.733076  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.733472  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.733501  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.733735  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.733983  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.734204  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.734364  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:04:28.820893  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 13:04:28.820974  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:04:28.848792  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 13:04:28.848871  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 13:04:28.875218  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 13:04:28.875293  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:04:28.906647  783699 provision.go:87] duration metric: took 364.424362ms to configureAuth
	I1007 13:04:28.906681  783699 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:04:28.906949  783699 config.go:182] Loaded profile config "multinode-723069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:28.907045  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.910284  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.910727  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.910774  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.911005  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.911239  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.911417  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.911651  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.911868  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.912102  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.912121  783699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:05:59.782562  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:05:59.782602  783699 machine.go:96] duration metric: took 1m31.610527251s to provisionDockerMachine
	I1007 13:05:59.782621  783699 start.go:293] postStartSetup for "multinode-723069" (driver="kvm2")
	I1007 13:05:59.782633  783699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:05:59.782657  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:05:59.783010  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:05:59.783058  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:05:59.786312  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.786727  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:05:59.786750  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.786989  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:05:59.787190  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.787409  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:05:59.787570  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:05:59.878722  783699 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:05:59.883606  783699 command_runner.go:130] > NAME=Buildroot
	I1007 13:05:59.883640  783699 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 13:05:59.883644  783699 command_runner.go:130] > ID=buildroot
	I1007 13:05:59.883650  783699 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 13:05:59.883657  783699 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 13:05:59.883710  783699 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:05:59.883728  783699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:05:59.883810  783699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:05:59.883901  783699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:05:59.883917  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 13:05:59.884032  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:05:59.894766  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:05:59.922063  783699 start.go:296] duration metric: took 139.39458ms for postStartSetup
	I1007 13:05:59.922116  783699 fix.go:56] duration metric: took 1m31.77302452s for fixHost
	I1007 13:05:59.922149  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:05:59.924790  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.925211  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:05:59.925240  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.925407  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:05:59.925593  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.925768  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.925884  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:05:59.926018  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:05:59.926235  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:05:59.926249  783699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:06:00.039492  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306360.019052140
	
	I1007 13:06:00.039518  783699 fix.go:216] guest clock: 1728306360.019052140
	I1007 13:06:00.039528  783699 fix.go:229] Guest: 2024-10-07 13:06:00.01905214 +0000 UTC Remote: 2024-10-07 13:05:59.922121693 +0000 UTC m=+91.912039561 (delta=96.930447ms)
	I1007 13:06:00.039582  783699 fix.go:200] guest clock delta is within tolerance: 96.930447ms
	I1007 13:06:00.039591  783699 start.go:83] releasing machines lock for "multinode-723069", held for 1m31.890517559s
	I1007 13:06:00.039621  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.039900  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:06:00.042532  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.042914  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.042943  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.043169  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.043744  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.043968  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.044079  783699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:06:00.044137  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:06:00.044197  783699 ssh_runner.go:195] Run: cat /version.json
	I1007 13:06:00.044225  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:06:00.047010  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047099  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047499  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.047527  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047557  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.047580  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047744  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:06:00.047855  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:06:00.047985  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:06:00.048054  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:06:00.048127  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:06:00.048213  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:06:00.048231  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:06:00.048312  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:06:00.127244  783699 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 13:06:00.127486  783699 ssh_runner.go:195] Run: systemctl --version
	I1007 13:06:00.152127  783699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 13:06:00.152296  783699 command_runner.go:130] > systemd 252 (252)
	I1007 13:06:00.152329  783699 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 13:06:00.152399  783699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:06:00.323524  783699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:06:00.340655  783699 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 13:06:00.340784  783699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:06:00.340850  783699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:06:00.354200  783699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:06:00.354232  783699 start.go:495] detecting cgroup driver to use...
	I1007 13:06:00.354315  783699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:06:00.378061  783699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:06:00.395558  783699 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:06:00.395624  783699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:06:00.413561  783699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:06:00.430094  783699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:06:00.587019  783699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:06:00.739463  783699 docker.go:233] disabling docker service ...
	I1007 13:06:00.739555  783699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:06:00.759239  783699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:06:00.775069  783699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:06:00.922268  783699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:06:01.069009  783699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:06:01.085523  783699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:06:01.106139  783699 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 13:06:01.106581  783699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:06:01.106658  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.119002  783699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:06:01.119083  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.131221  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.142937  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.154663  783699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:06:01.166258  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.177809  783699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.189505  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.201249  783699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:06:01.211805  783699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 13:06:01.211893  783699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:06:01.222500  783699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:01.375887  783699 ssh_runner.go:195] Run: sudo systemctl restart crio
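For reference, the sed edits logged above rewrite a handful of keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. Below is a minimal Go sketch of the same kind of in-place key rewrite, illustrative only and not minikube's ssh_runner code; the path and values are taken from the commands above.

// crio_conf.go: rewrite selected keys in 02-crio.conf, mirroring the sed edits above.
// Illustrative sketch only; run it against a copy of the file, values come from the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}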
	I1007 13:06:01.621493  783699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:06:01.621566  783699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:06:01.626610  783699 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 13:06:01.626637  783699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 13:06:01.626647  783699 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I1007 13:06:01.626657  783699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 13:06:01.626675  783699 command_runner.go:130] > Access: 2024-10-07 13:06:01.534557584 +0000
	I1007 13:06:01.626685  783699 command_runner.go:130] > Modify: 2024-10-07 13:06:01.458555918 +0000
	I1007 13:06:01.626694  783699 command_runner.go:130] > Change: 2024-10-07 13:06:01.458555918 +0000
	I1007 13:06:01.626703  783699 command_runner.go:130] >  Birth: -
	I1007 13:06:01.626833  783699 start.go:563] Will wait 60s for crictl version
	I1007 13:06:01.626901  783699 ssh_runner.go:195] Run: which crictl
	I1007 13:06:01.631301  783699 command_runner.go:130] > /usr/bin/crictl
	I1007 13:06:01.631378  783699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:06:01.676219  783699 command_runner.go:130] > Version:  0.1.0
	I1007 13:06:01.676245  783699 command_runner.go:130] > RuntimeName:  cri-o
	I1007 13:06:01.676249  783699 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 13:06:01.676256  783699 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 13:06:01.677366  783699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:06:01.677462  783699 ssh_runner.go:195] Run: crio --version
	I1007 13:06:01.706679  783699 command_runner.go:130] > crio version 1.29.1
	I1007 13:06:01.706703  783699 command_runner.go:130] > Version:        1.29.1
	I1007 13:06:01.706708  783699 command_runner.go:130] > GitCommit:      unknown
	I1007 13:06:01.706713  783699 command_runner.go:130] > GitCommitDate:  unknown
	I1007 13:06:01.706717  783699 command_runner.go:130] > GitTreeState:   clean
	I1007 13:06:01.706723  783699 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 13:06:01.706727  783699 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 13:06:01.706731  783699 command_runner.go:130] > Compiler:       gc
	I1007 13:06:01.706736  783699 command_runner.go:130] > Platform:       linux/amd64
	I1007 13:06:01.706740  783699 command_runner.go:130] > Linkmode:       dynamic
	I1007 13:06:01.706754  783699 command_runner.go:130] > BuildTags:      
	I1007 13:06:01.706758  783699 command_runner.go:130] >   containers_image_ostree_stub
	I1007 13:06:01.706762  783699 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 13:06:01.706766  783699 command_runner.go:130] >   btrfs_noversion
	I1007 13:06:01.706771  783699 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 13:06:01.706775  783699 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 13:06:01.706779  783699 command_runner.go:130] >   seccomp
	I1007 13:06:01.706783  783699 command_runner.go:130] > LDFlags:          unknown
	I1007 13:06:01.706788  783699 command_runner.go:130] > SeccompEnabled:   true
	I1007 13:06:01.706796  783699 command_runner.go:130] > AppArmorEnabled:  false
	I1007 13:06:01.708057  783699 ssh_runner.go:195] Run: crio --version
	I1007 13:06:01.739495  783699 command_runner.go:130] > crio version 1.29.1
	I1007 13:06:01.739522  783699 command_runner.go:130] > Version:        1.29.1
	I1007 13:06:01.739529  783699 command_runner.go:130] > GitCommit:      unknown
	I1007 13:06:01.739533  783699 command_runner.go:130] > GitCommitDate:  unknown
	I1007 13:06:01.739538  783699 command_runner.go:130] > GitTreeState:   clean
	I1007 13:06:01.739543  783699 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 13:06:01.739547  783699 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 13:06:01.739551  783699 command_runner.go:130] > Compiler:       gc
	I1007 13:06:01.739556  783699 command_runner.go:130] > Platform:       linux/amd64
	I1007 13:06:01.739563  783699 command_runner.go:130] > Linkmode:       dynamic
	I1007 13:06:01.739570  783699 command_runner.go:130] > BuildTags:      
	I1007 13:06:01.739575  783699 command_runner.go:130] >   containers_image_ostree_stub
	I1007 13:06:01.739582  783699 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 13:06:01.739588  783699 command_runner.go:130] >   btrfs_noversion
	I1007 13:06:01.739596  783699 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 13:06:01.739603  783699 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 13:06:01.739610  783699 command_runner.go:130] >   seccomp
	I1007 13:06:01.739620  783699 command_runner.go:130] > LDFlags:          unknown
	I1007 13:06:01.739627  783699 command_runner.go:130] > SeccompEnabled:   true
	I1007 13:06:01.739634  783699 command_runner.go:130] > AppArmorEnabled:  false
	I1007 13:06:01.742800  783699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:06:01.744347  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:06:01.747228  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:01.747670  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:01.747711  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:01.747968  783699 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:06:01.752393  783699 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1007 13:06:01.752509  783699 kubeadm.go:883] updating cluster {Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:06:01.752675  783699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:06:01.752729  783699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:01.798518  783699 command_runner.go:130] > {
	I1007 13:06:01.798551  783699 command_runner.go:130] >   "images": [
	I1007 13:06:01.798557  783699 command_runner.go:130] >     {
	I1007 13:06:01.798568  783699 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 13:06:01.798575  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798586  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 13:06:01.798592  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798598  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798610  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 13:06:01.798621  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 13:06:01.798627  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798632  783699 command_runner.go:130] >       "size": "87190579",
	I1007 13:06:01.798636  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798640  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798646  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798650  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798653  783699 command_runner.go:130] >     },
	I1007 13:06:01.798657  783699 command_runner.go:130] >     {
	I1007 13:06:01.798663  783699 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 13:06:01.798671  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798676  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 13:06:01.798681  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798685  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798694  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 13:06:01.798705  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 13:06:01.798711  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798716  783699 command_runner.go:130] >       "size": "1363676",
	I1007 13:06:01.798723  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798730  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798734  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798740  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798743  783699 command_runner.go:130] >     },
	I1007 13:06:01.798747  783699 command_runner.go:130] >     {
	I1007 13:06:01.798757  783699 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 13:06:01.798762  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798766  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 13:06:01.798770  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798774  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798781  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 13:06:01.798789  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 13:06:01.798793  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798797  783699 command_runner.go:130] >       "size": "31470524",
	I1007 13:06:01.798802  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798806  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798809  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798814  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798820  783699 command_runner.go:130] >     },
	I1007 13:06:01.798825  783699 command_runner.go:130] >     {
	I1007 13:06:01.798831  783699 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 13:06:01.798835  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798841  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 13:06:01.798844  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798850  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798856  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 13:06:01.798871  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 13:06:01.798875  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798878  783699 command_runner.go:130] >       "size": "63273227",
	I1007 13:06:01.798882  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798886  783699 command_runner.go:130] >       "username": "nonroot",
	I1007 13:06:01.798891  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798895  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798899  783699 command_runner.go:130] >     },
	I1007 13:06:01.798903  783699 command_runner.go:130] >     {
	I1007 13:06:01.798909  783699 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 13:06:01.798915  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798920  783699 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 13:06:01.798927  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798931  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798940  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 13:06:01.798947  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 13:06:01.798954  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798959  783699 command_runner.go:130] >       "size": "149009664",
	I1007 13:06:01.798965  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.798969  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.798973  783699 command_runner.go:130] >       },
	I1007 13:06:01.798977  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798981  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798985  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798991  783699 command_runner.go:130] >     },
	I1007 13:06:01.798994  783699 command_runner.go:130] >     {
	I1007 13:06:01.799000  783699 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 13:06:01.799006  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799011  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 13:06:01.799014  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799018  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799025  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 13:06:01.799037  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 13:06:01.799044  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799050  783699 command_runner.go:130] >       "size": "95237600",
	I1007 13:06:01.799054  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799059  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799063  783699 command_runner.go:130] >       },
	I1007 13:06:01.799069  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799073  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799078  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799082  783699 command_runner.go:130] >     },
	I1007 13:06:01.799085  783699 command_runner.go:130] >     {
	I1007 13:06:01.799093  783699 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 13:06:01.799097  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799104  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 13:06:01.799110  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799114  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799124  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 13:06:01.799134  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 13:06:01.799137  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799142  783699 command_runner.go:130] >       "size": "89437508",
	I1007 13:06:01.799146  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799150  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799154  783699 command_runner.go:130] >       },
	I1007 13:06:01.799161  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799165  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799171  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799175  783699 command_runner.go:130] >     },
	I1007 13:06:01.799179  783699 command_runner.go:130] >     {
	I1007 13:06:01.799185  783699 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 13:06:01.799191  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799196  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 13:06:01.799201  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799205  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799221  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 13:06:01.799231  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 13:06:01.799234  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799241  783699 command_runner.go:130] >       "size": "92733849",
	I1007 13:06:01.799245  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.799252  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799256  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799259  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799262  783699 command_runner.go:130] >     },
	I1007 13:06:01.799265  783699 command_runner.go:130] >     {
	I1007 13:06:01.799271  783699 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 13:06:01.799274  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799279  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 13:06:01.799283  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799287  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799295  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 13:06:01.799302  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 13:06:01.799305  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799308  783699 command_runner.go:130] >       "size": "68420934",
	I1007 13:06:01.799312  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799315  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799318  783699 command_runner.go:130] >       },
	I1007 13:06:01.799321  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799325  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799329  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799332  783699 command_runner.go:130] >     },
	I1007 13:06:01.799335  783699 command_runner.go:130] >     {
	I1007 13:06:01.799340  783699 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 13:06:01.799344  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799348  783699 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 13:06:01.799351  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799355  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799361  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 13:06:01.799370  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 13:06:01.799376  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799379  783699 command_runner.go:130] >       "size": "742080",
	I1007 13:06:01.799385  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799390  783699 command_runner.go:130] >         "value": "65535"
	I1007 13:06:01.799396  783699 command_runner.go:130] >       },
	I1007 13:06:01.799400  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799406  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799410  783699 command_runner.go:130] >       "pinned": true
	I1007 13:06:01.799416  783699 command_runner.go:130] >     }
	I1007 13:06:01.799420  783699 command_runner.go:130] >   ]
	I1007 13:06:01.799425  783699 command_runner.go:130] > }
	I1007 13:06:01.799607  783699 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:01.799620  783699 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:06:01.799670  783699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:01.833915  783699 command_runner.go:130] > {
	I1007 13:06:01.833944  783699 command_runner.go:130] >   "images": [
	I1007 13:06:01.833949  783699 command_runner.go:130] >     {
	I1007 13:06:01.833957  783699 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 13:06:01.833962  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.833968  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 13:06:01.833971  783699 command_runner.go:130] >       ],
	I1007 13:06:01.833975  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.833983  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 13:06:01.833990  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 13:06:01.833993  783699 command_runner.go:130] >       ],
	I1007 13:06:01.833998  783699 command_runner.go:130] >       "size": "87190579",
	I1007 13:06:01.834001  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834005  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834037  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834045  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834059  783699 command_runner.go:130] >     },
	I1007 13:06:01.834064  783699 command_runner.go:130] >     {
	I1007 13:06:01.834072  783699 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 13:06:01.834078  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834090  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 13:06:01.834096  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834103  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834110  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 13:06:01.834118  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 13:06:01.834122  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834128  783699 command_runner.go:130] >       "size": "1363676",
	I1007 13:06:01.834132  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834139  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834144  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834148  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834153  783699 command_runner.go:130] >     },
	I1007 13:06:01.834156  783699 command_runner.go:130] >     {
	I1007 13:06:01.834162  783699 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 13:06:01.834167  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834172  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 13:06:01.834175  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834179  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834189  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 13:06:01.834199  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 13:06:01.834203  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834207  783699 command_runner.go:130] >       "size": "31470524",
	I1007 13:06:01.834210  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834214  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834218  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834222  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834225  783699 command_runner.go:130] >     },
	I1007 13:06:01.834229  783699 command_runner.go:130] >     {
	I1007 13:06:01.834235  783699 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 13:06:01.834240  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834244  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 13:06:01.834247  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834252  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834259  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 13:06:01.834270  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 13:06:01.834274  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834278  783699 command_runner.go:130] >       "size": "63273227",
	I1007 13:06:01.834282  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834287  783699 command_runner.go:130] >       "username": "nonroot",
	I1007 13:06:01.834294  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834299  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834302  783699 command_runner.go:130] >     },
	I1007 13:06:01.834305  783699 command_runner.go:130] >     {
	I1007 13:06:01.834311  783699 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 13:06:01.834316  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834321  783699 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 13:06:01.834327  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834331  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834337  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 13:06:01.834345  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 13:06:01.834349  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834353  783699 command_runner.go:130] >       "size": "149009664",
	I1007 13:06:01.834357  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834361  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834364  783699 command_runner.go:130] >       },
	I1007 13:06:01.834368  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834372  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834377  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834380  783699 command_runner.go:130] >     },
	I1007 13:06:01.834384  783699 command_runner.go:130] >     {
	I1007 13:06:01.834390  783699 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 13:06:01.834394  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834399  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 13:06:01.834403  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834407  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834417  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 13:06:01.834424  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 13:06:01.834429  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834433  783699 command_runner.go:130] >       "size": "95237600",
	I1007 13:06:01.834439  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834444  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834447  783699 command_runner.go:130] >       },
	I1007 13:06:01.834451  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834455  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834460  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834464  783699 command_runner.go:130] >     },
	I1007 13:06:01.834467  783699 command_runner.go:130] >     {
	I1007 13:06:01.834473  783699 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 13:06:01.834478  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834483  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 13:06:01.834486  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834491  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834498  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 13:06:01.834507  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 13:06:01.834513  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834517  783699 command_runner.go:130] >       "size": "89437508",
	I1007 13:06:01.834520  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834524  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834528  783699 command_runner.go:130] >       },
	I1007 13:06:01.834532  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834537  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834541  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834546  783699 command_runner.go:130] >     },
	I1007 13:06:01.834549  783699 command_runner.go:130] >     {
	I1007 13:06:01.834555  783699 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 13:06:01.834561  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834566  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 13:06:01.834571  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834574  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834589  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 13:06:01.834599  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 13:06:01.834603  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834607  783699 command_runner.go:130] >       "size": "92733849",
	I1007 13:06:01.834611  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834617  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834621  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834625  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834629  783699 command_runner.go:130] >     },
	I1007 13:06:01.834632  783699 command_runner.go:130] >     {
	I1007 13:06:01.834638  783699 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 13:06:01.834651  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834658  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 13:06:01.834662  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834668  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834675  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 13:06:01.834684  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 13:06:01.834687  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834691  783699 command_runner.go:130] >       "size": "68420934",
	I1007 13:06:01.834695  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834699  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834703  783699 command_runner.go:130] >       },
	I1007 13:06:01.834707  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834713  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834718  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834721  783699 command_runner.go:130] >     },
	I1007 13:06:01.834725  783699 command_runner.go:130] >     {
	I1007 13:06:01.834730  783699 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 13:06:01.834736  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834740  783699 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 13:06:01.834744  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834748  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834756  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 13:06:01.834765  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 13:06:01.834769  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834772  783699 command_runner.go:130] >       "size": "742080",
	I1007 13:06:01.834779  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834783  783699 command_runner.go:130] >         "value": "65535"
	I1007 13:06:01.834787  783699 command_runner.go:130] >       },
	I1007 13:06:01.834791  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834794  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834798  783699 command_runner.go:130] >       "pinned": true
	I1007 13:06:01.834803  783699 command_runner.go:130] >     }
	I1007 13:06:01.834808  783699 command_runner.go:130] >   ]
	I1007 13:06:01.834812  783699 command_runner.go:130] > }
	I1007 13:06:01.835649  783699 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:01.835672  783699 cache_images.go:84] Images are preloaded, skipping loading
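The preload decision above is driven entirely by the output of "sudo crictl images --output json". Below is a minimal sketch, assuming the field names shown in the JSON above and not minikube's actual crio.go/cache_images.go logic, of decoding that output and checking that a few required tags are present.

// check_preload.go: decode `crictl images --output json` and report required repoTags.
// Illustrative sketch; the struct fields mirror the JSON printed in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// A few of the tags that appear in the image list above.
	for _, tag := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	} {
		fmt.Printf("%-45s preloaded=%v\n", tag, have[tag])
	}
}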
	I1007 13:06:01.835681  783699 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.31.1 crio true true} ...
	I1007 13:06:01.835785  783699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-723069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
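The kubelet unit shown above is generated from three node-specific values: the Kubernetes version, the node name, and the node IP. Below is a minimal text/template sketch that reproduces the same ExecStart line from those values; it is illustrative only and not minikube's actual kubeadm.go template.

// kubelet_unit.go: render a kubelet systemd drop-in like the one logged above.
// Illustrative sketch; the flag set is copied from the log, the values are this node's.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "multinode-723069", "192.168.39.213"} // values taken from the log above
	tmpl := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}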
	I1007 13:06:01.835859  783699 ssh_runner.go:195] Run: crio config
	I1007 13:06:01.872651  783699 command_runner.go:130] ! time="2024-10-07 13:06:01.852324614Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 13:06:01.878179  783699 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 13:06:01.886755  783699 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 13:06:01.886787  783699 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 13:06:01.886797  783699 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 13:06:01.886807  783699 command_runner.go:130] > #
	I1007 13:06:01.886817  783699 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 13:06:01.886826  783699 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 13:06:01.886833  783699 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 13:06:01.886842  783699 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 13:06:01.886846  783699 command_runner.go:130] > # reload'.
	I1007 13:06:01.886856  783699 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 13:06:01.886865  783699 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 13:06:01.886875  783699 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 13:06:01.886887  783699 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 13:06:01.886909  783699 command_runner.go:130] > [crio]
	I1007 13:06:01.886919  783699 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 13:06:01.886923  783699 command_runner.go:130] > # containers images, in this directory.
	I1007 13:06:01.886928  783699 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 13:06:01.886939  783699 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 13:06:01.886945  783699 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 13:06:01.886953  783699 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1007 13:06:01.886959  783699 command_runner.go:130] > # imagestore = ""
	I1007 13:06:01.886965  783699 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 13:06:01.886973  783699 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 13:06:01.886977  783699 command_runner.go:130] > storage_driver = "overlay"
	I1007 13:06:01.886983  783699 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 13:06:01.886988  783699 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 13:06:01.886993  783699 command_runner.go:130] > storage_option = [
	I1007 13:06:01.886997  783699 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 13:06:01.887000  783699 command_runner.go:130] > ]
	I1007 13:06:01.887006  783699 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 13:06:01.887014  783699 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 13:06:01.887018  783699 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 13:06:01.887023  783699 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 13:06:01.887031  783699 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 13:06:01.887035  783699 command_runner.go:130] > # always happen on a node reboot
	I1007 13:06:01.887040  783699 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 13:06:01.887053  783699 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 13:06:01.887061  783699 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 13:06:01.887066  783699 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 13:06:01.887072  783699 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 13:06:01.887081  783699 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 13:06:01.887088  783699 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 13:06:01.887094  783699 command_runner.go:130] > # internal_wipe = true
	I1007 13:06:01.887102  783699 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 13:06:01.887109  783699 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 13:06:01.887113  783699 command_runner.go:130] > # internal_repair = false
	I1007 13:06:01.887120  783699 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 13:06:01.887126  783699 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 13:06:01.887133  783699 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 13:06:01.887139  783699 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 13:06:01.887149  783699 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 13:06:01.887156  783699 command_runner.go:130] > [crio.api]
	I1007 13:06:01.887161  783699 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 13:06:01.887168  783699 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 13:06:01.887173  783699 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 13:06:01.887181  783699 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 13:06:01.887189  783699 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 13:06:01.887196  783699 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 13:06:01.887200  783699 command_runner.go:130] > # stream_port = "0"
	I1007 13:06:01.887207  783699 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 13:06:01.887212  783699 command_runner.go:130] > # stream_enable_tls = false
	I1007 13:06:01.887220  783699 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 13:06:01.887227  783699 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 13:06:01.887233  783699 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 13:06:01.887244  783699 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 13:06:01.887249  783699 command_runner.go:130] > # minutes.
	I1007 13:06:01.887253  783699 command_runner.go:130] > # stream_tls_cert = ""
	I1007 13:06:01.887261  783699 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 13:06:01.887268  783699 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 13:06:01.887274  783699 command_runner.go:130] > # stream_tls_key = ""
	I1007 13:06:01.887280  783699 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 13:06:01.887288  783699 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 13:06:01.887302  783699 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 13:06:01.887308  783699 command_runner.go:130] > # stream_tls_ca = ""
	I1007 13:06:01.887315  783699 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 13:06:01.887322  783699 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 13:06:01.887330  783699 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 13:06:01.887337  783699 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1007 13:06:01.887343  783699 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 13:06:01.887351  783699 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 13:06:01.887357  783699 command_runner.go:130] > [crio.runtime]
	I1007 13:06:01.887363  783699 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 13:06:01.887370  783699 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 13:06:01.887376  783699 command_runner.go:130] > # "nofile=1024:2048"
	I1007 13:06:01.887381  783699 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 13:06:01.887387  783699 command_runner.go:130] > # default_ulimits = [
	I1007 13:06:01.887391  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887397  783699 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 13:06:01.887404  783699 command_runner.go:130] > # no_pivot = false
	I1007 13:06:01.887412  783699 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 13:06:01.887420  783699 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 13:06:01.887425  783699 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 13:06:01.887431  783699 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 13:06:01.887437  783699 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 13:06:01.887443  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 13:06:01.887450  783699 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 13:06:01.887454  783699 command_runner.go:130] > # Cgroup setting for conmon
	I1007 13:06:01.887463  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 13:06:01.887469  783699 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 13:06:01.887475  783699 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 13:06:01.887482  783699 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 13:06:01.887488  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 13:06:01.887494  783699 command_runner.go:130] > conmon_env = [
	I1007 13:06:01.887499  783699 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 13:06:01.887504  783699 command_runner.go:130] > ]
	I1007 13:06:01.887509  783699 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 13:06:01.887518  783699 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 13:06:01.887526  783699 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 13:06:01.887532  783699 command_runner.go:130] > # default_env = [
	I1007 13:06:01.887536  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887544  783699 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 13:06:01.887552  783699 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1007 13:06:01.887558  783699 command_runner.go:130] > # selinux = false
	I1007 13:06:01.887564  783699 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 13:06:01.887572  783699 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 13:06:01.887580  783699 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 13:06:01.887587  783699 command_runner.go:130] > # seccomp_profile = ""
	I1007 13:06:01.887592  783699 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 13:06:01.887599  783699 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 13:06:01.887605  783699 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 13:06:01.887625  783699 command_runner.go:130] > # which might increase security.
	I1007 13:06:01.887638  783699 command_runner.go:130] > # This option is currently deprecated,
	I1007 13:06:01.887644  783699 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 13:06:01.887648  783699 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 13:06:01.887654  783699 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 13:06:01.887663  783699 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 13:06:01.887673  783699 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 13:06:01.887682  783699 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 13:06:01.887689  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.887693  783699 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 13:06:01.887701  783699 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 13:06:01.887707  783699 command_runner.go:130] > # the cgroup blockio controller.
	I1007 13:06:01.887713  783699 command_runner.go:130] > # blockio_config_file = ""
	I1007 13:06:01.887720  783699 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 13:06:01.887726  783699 command_runner.go:130] > # blockio parameters.
	I1007 13:06:01.887730  783699 command_runner.go:130] > # blockio_reload = false
	I1007 13:06:01.887738  783699 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 13:06:01.887744  783699 command_runner.go:130] > # irqbalance daemon.
	I1007 13:06:01.887750  783699 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 13:06:01.887759  783699 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 13:06:01.887767  783699 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 13:06:01.887774  783699 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 13:06:01.887781  783699 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 13:06:01.887787  783699 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 13:06:01.887794  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.887805  783699 command_runner.go:130] > # rdt_config_file = ""
	I1007 13:06:01.887812  783699 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 13:06:01.887816  783699 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 13:06:01.887836  783699 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 13:06:01.887842  783699 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 13:06:01.887848  783699 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 13:06:01.887857  783699 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 13:06:01.887863  783699 command_runner.go:130] > # will be added.
	I1007 13:06:01.887867  783699 command_runner.go:130] > # default_capabilities = [
	I1007 13:06:01.887873  783699 command_runner.go:130] > # 	"CHOWN",
	I1007 13:06:01.887877  783699 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 13:06:01.887883  783699 command_runner.go:130] > # 	"FSETID",
	I1007 13:06:01.887887  783699 command_runner.go:130] > # 	"FOWNER",
	I1007 13:06:01.887892  783699 command_runner.go:130] > # 	"SETGID",
	I1007 13:06:01.887896  783699 command_runner.go:130] > # 	"SETUID",
	I1007 13:06:01.887901  783699 command_runner.go:130] > # 	"SETPCAP",
	I1007 13:06:01.887907  783699 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 13:06:01.887911  783699 command_runner.go:130] > # 	"KILL",
	I1007 13:06:01.887917  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887925  783699 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 13:06:01.887933  783699 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 13:06:01.887943  783699 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 13:06:01.887950  783699 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 13:06:01.887955  783699 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 13:06:01.887962  783699 command_runner.go:130] > default_sysctls = [
	I1007 13:06:01.887966  783699 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 13:06:01.887970  783699 command_runner.go:130] > ]
	I1007 13:06:01.887975  783699 command_runner.go:130] > # List of devices on the host that a
	I1007 13:06:01.887981  783699 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 13:06:01.887985  783699 command_runner.go:130] > # allowed_devices = [
	I1007 13:06:01.887989  783699 command_runner.go:130] > # 	"/dev/fuse",
	I1007 13:06:01.887992  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887998  783699 command_runner.go:130] > # List of additional devices, specified as
	I1007 13:06:01.888007  783699 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 13:06:01.888012  783699 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 13:06:01.888017  783699 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 13:06:01.888021  783699 command_runner.go:130] > # additional_devices = [
	I1007 13:06:01.888027  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888032  783699 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 13:06:01.888038  783699 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 13:06:01.888042  783699 command_runner.go:130] > # 	"/etc/cdi",
	I1007 13:06:01.888047  783699 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 13:06:01.888054  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888062  783699 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 13:06:01.888069  783699 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 13:06:01.888075  783699 command_runner.go:130] > # Defaults to false.
	I1007 13:06:01.888081  783699 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 13:06:01.888089  783699 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 13:06:01.888097  783699 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 13:06:01.888103  783699 command_runner.go:130] > # hooks_dir = [
	I1007 13:06:01.888108  783699 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 13:06:01.888113  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888119  783699 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 13:06:01.888127  783699 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 13:06:01.888133  783699 command_runner.go:130] > # its default mounts from the following two files:
	I1007 13:06:01.888139  783699 command_runner.go:130] > #
	I1007 13:06:01.888145  783699 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 13:06:01.888154  783699 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 13:06:01.888160  783699 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 13:06:01.888165  783699 command_runner.go:130] > #
	I1007 13:06:01.888171  783699 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 13:06:01.888180  783699 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 13:06:01.888188  783699 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 13:06:01.888199  783699 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 13:06:01.888204  783699 command_runner.go:130] > #
	I1007 13:06:01.888209  783699 command_runner.go:130] > # default_mounts_file = ""
	I1007 13:06:01.888217  783699 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 13:06:01.888226  783699 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 13:06:01.888230  783699 command_runner.go:130] > pids_limit = 1024
	I1007 13:06:01.888237  783699 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1007 13:06:01.888246  783699 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 13:06:01.888252  783699 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 13:06:01.888263  783699 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 13:06:01.888271  783699 command_runner.go:130] > # log_size_max = -1
	I1007 13:06:01.888282  783699 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 13:06:01.888291  783699 command_runner.go:130] > # log_to_journald = false
	I1007 13:06:01.888303  783699 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 13:06:01.888314  783699 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 13:06:01.888325  783699 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 13:06:01.888335  783699 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 13:06:01.888346  783699 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 13:06:01.888355  783699 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 13:06:01.888367  783699 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 13:06:01.888376  783699 command_runner.go:130] > # read_only = false
	I1007 13:06:01.888388  783699 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 13:06:01.888400  783699 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 13:06:01.888409  783699 command_runner.go:130] > # live configuration reload.
	I1007 13:06:01.888416  783699 command_runner.go:130] > # log_level = "info"
	I1007 13:06:01.888427  783699 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 13:06:01.888438  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.888449  783699 command_runner.go:130] > # log_filter = ""
	I1007 13:06:01.888461  783699 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 13:06:01.888478  783699 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 13:06:01.888488  783699 command_runner.go:130] > # separated by comma.
	I1007 13:06:01.888500  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888510  783699 command_runner.go:130] > # uid_mappings = ""
	I1007 13:06:01.888522  783699 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 13:06:01.888535  783699 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 13:06:01.888544  783699 command_runner.go:130] > # separated by comma.
	I1007 13:06:01.888559  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888571  783699 command_runner.go:130] > # gid_mappings = ""
	I1007 13:06:01.888584  783699 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 13:06:01.888596  783699 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 13:06:01.888608  783699 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 13:06:01.888622  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888633  783699 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 13:06:01.888644  783699 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 13:06:01.888658  783699 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 13:06:01.888672  783699 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 13:06:01.888686  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888696  783699 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 13:06:01.888708  783699 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 13:06:01.888721  783699 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 13:06:01.888733  783699 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 13:06:01.888742  783699 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 13:06:01.888751  783699 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 13:06:01.888763  783699 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 13:06:01.888773  783699 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 13:06:01.888784  783699 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 13:06:01.888793  783699 command_runner.go:130] > drop_infra_ctr = false
	I1007 13:06:01.888810  783699 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 13:06:01.888822  783699 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 13:06:01.888835  783699 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 13:06:01.888846  783699 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 13:06:01.888859  783699 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 13:06:01.888872  783699 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 13:06:01.888884  783699 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 13:06:01.888896  783699 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 13:06:01.888904  783699 command_runner.go:130] > # shared_cpuset = ""
	I1007 13:06:01.888916  783699 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 13:06:01.888927  783699 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 13:06:01.888937  783699 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 13:06:01.888951  783699 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 13:06:01.888960  783699 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 13:06:01.888969  783699 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 13:06:01.888984  783699 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 13:06:01.888994  783699 command_runner.go:130] > # enable_criu_support = false
	I1007 13:06:01.889002  783699 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 13:06:01.889014  783699 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 13:06:01.889021  783699 command_runner.go:130] > # enable_pod_events = false
	I1007 13:06:01.889034  783699 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 13:06:01.889060  783699 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 13:06:01.889070  783699 command_runner.go:130] > # default_runtime = "runc"
	I1007 13:06:01.889081  783699 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 13:06:01.889093  783699 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1007 13:06:01.889110  783699 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 13:06:01.889121  783699 command_runner.go:130] > # creation as a file is not desired either.
	I1007 13:06:01.889136  783699 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 13:06:01.889149  783699 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 13:06:01.889158  783699 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 13:06:01.889166  783699 command_runner.go:130] > # ]
	I1007 13:06:01.889177  783699 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 13:06:01.889189  783699 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 13:06:01.889202  783699 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 13:06:01.889212  783699 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 13:06:01.889220  783699 command_runner.go:130] > #
	I1007 13:06:01.889227  783699 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 13:06:01.889237  783699 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 13:06:01.889263  783699 command_runner.go:130] > # runtime_type = "oci"
	I1007 13:06:01.889273  783699 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 13:06:01.889277  783699 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 13:06:01.889284  783699 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 13:06:01.889288  783699 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 13:06:01.889294  783699 command_runner.go:130] > # monitor_env = []
	I1007 13:06:01.889299  783699 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 13:06:01.889305  783699 command_runner.go:130] > # allowed_annotations = []
	I1007 13:06:01.889310  783699 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 13:06:01.889315  783699 command_runner.go:130] > # Where:
	I1007 13:06:01.889320  783699 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 13:06:01.889328  783699 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 13:06:01.889336  783699 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 13:06:01.889344  783699 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 13:06:01.889353  783699 command_runner.go:130] > #   in $PATH.
	I1007 13:06:01.889361  783699 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 13:06:01.889366  783699 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 13:06:01.889374  783699 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 13:06:01.889381  783699 command_runner.go:130] > #   state.
	I1007 13:06:01.889387  783699 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 13:06:01.889395  783699 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1007 13:06:01.889402  783699 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 13:06:01.889407  783699 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 13:06:01.889415  783699 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 13:06:01.889424  783699 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 13:06:01.889429  783699 command_runner.go:130] > #   The currently recognized values are:
	I1007 13:06:01.889437  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 13:06:01.889447  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 13:06:01.889453  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 13:06:01.889461  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 13:06:01.889468  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 13:06:01.889476  783699 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 13:06:01.889485  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 13:06:01.889493  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 13:06:01.889501  783699 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 13:06:01.889509  783699 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 13:06:01.889515  783699 command_runner.go:130] > #   deprecated option "conmon".
	I1007 13:06:01.889522  783699 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 13:06:01.889529  783699 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 13:06:01.889535  783699 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 13:06:01.889542  783699 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 13:06:01.889549  783699 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1007 13:06:01.889555  783699 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 13:06:01.889562  783699 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 13:06:01.889569  783699 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 13:06:01.889572  783699 command_runner.go:130] > #
	I1007 13:06:01.889577  783699 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 13:06:01.889585  783699 command_runner.go:130] > #
	I1007 13:06:01.889594  783699 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 13:06:01.889600  783699 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 13:06:01.889606  783699 command_runner.go:130] > #
	I1007 13:06:01.889611  783699 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 13:06:01.889619  783699 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 13:06:01.889622  783699 command_runner.go:130] > #
	I1007 13:06:01.889630  783699 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 13:06:01.889636  783699 command_runner.go:130] > # feature.
	I1007 13:06:01.889639  783699 command_runner.go:130] > #
	I1007 13:06:01.889646  783699 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 13:06:01.889654  783699 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 13:06:01.889660  783699 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 13:06:01.889668  783699 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 13:06:01.889676  783699 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 13:06:01.889679  783699 command_runner.go:130] > #
	I1007 13:06:01.889687  783699 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 13:06:01.889695  783699 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 13:06:01.889699  783699 command_runner.go:130] > #
	I1007 13:06:01.889705  783699 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 13:06:01.889713  783699 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 13:06:01.889718  783699 command_runner.go:130] > #
	I1007 13:06:01.889724  783699 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 13:06:01.889731  783699 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 13:06:01.889737  783699 command_runner.go:130] > # limitation.
	I1007 13:06:01.889744  783699 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 13:06:01.889750  783699 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 13:06:01.889754  783699 command_runner.go:130] > runtime_type = "oci"
	I1007 13:06:01.889760  783699 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 13:06:01.889764  783699 command_runner.go:130] > runtime_config_path = ""
	I1007 13:06:01.889772  783699 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 13:06:01.889776  783699 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 13:06:01.889783  783699 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 13:06:01.889786  783699 command_runner.go:130] > monitor_env = [
	I1007 13:06:01.889792  783699 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 13:06:01.889797  783699 command_runner.go:130] > ]
	I1007 13:06:01.889807  783699 command_runner.go:130] > privileged_without_host_devices = false
	I1007 13:06:01.889813  783699 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 13:06:01.889821  783699 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 13:06:01.889829  783699 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 13:06:01.889838  783699 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1007 13:06:01.889850  783699 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 13:06:01.889858  783699 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 13:06:01.889868  783699 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 13:06:01.889878  783699 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 13:06:01.889884  783699 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1007 13:06:01.889893  783699 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 13:06:01.889899  783699 command_runner.go:130] > # Example:
	I1007 13:06:01.889903  783699 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 13:06:01.889911  783699 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 13:06:01.889915  783699 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 13:06:01.889922  783699 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 13:06:01.889926  783699 command_runner.go:130] > # cpuset = 0
	I1007 13:06:01.889932  783699 command_runner.go:130] > # cpushares = "0-1"
	I1007 13:06:01.889935  783699 command_runner.go:130] > # Where:
	I1007 13:06:01.889942  783699 command_runner.go:130] > # The workload name is workload-type.
	I1007 13:06:01.889948  783699 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 13:06:01.889956  783699 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 13:06:01.889961  783699 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 13:06:01.889968  783699 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 13:06:01.889976  783699 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1007 13:06:01.889980  783699 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 13:06:01.889989  783699 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 13:06:01.889993  783699 command_runner.go:130] > # Default value is set to true
	I1007 13:06:01.889999  783699 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 13:06:01.890004  783699 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 13:06:01.890011  783699 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 13:06:01.890020  783699 command_runner.go:130] > # Default value is set to 'false'
	I1007 13:06:01.890035  783699 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 13:06:01.890042  783699 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 13:06:01.890046  783699 command_runner.go:130] > #
	I1007 13:06:01.890051  783699 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 13:06:01.890057  783699 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 13:06:01.890062  783699 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 13:06:01.890068  783699 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 13:06:01.890076  783699 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 13:06:01.890080  783699 command_runner.go:130] > [crio.image]
	I1007 13:06:01.890085  783699 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 13:06:01.890089  783699 command_runner.go:130] > # default_transport = "docker://"
	I1007 13:06:01.890095  783699 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 13:06:01.890100  783699 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 13:06:01.890104  783699 command_runner.go:130] > # global_auth_file = ""
	I1007 13:06:01.890109  783699 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 13:06:01.890113  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.890118  783699 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 13:06:01.890123  783699 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 13:06:01.890128  783699 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 13:06:01.890133  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.890137  783699 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 13:06:01.890142  783699 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 13:06:01.890147  783699 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1007 13:06:01.890153  783699 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1007 13:06:01.890158  783699 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 13:06:01.890162  783699 command_runner.go:130] > # pause_command = "/pause"
	I1007 13:06:01.890169  783699 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 13:06:01.890174  783699 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 13:06:01.890183  783699 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 13:06:01.890191  783699 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 13:06:01.890197  783699 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 13:06:01.890202  783699 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 13:06:01.890207  783699 command_runner.go:130] > # pinned_images = [
	I1007 13:06:01.890210  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890215  783699 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 13:06:01.890222  783699 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 13:06:01.890227  783699 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 13:06:01.890236  783699 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 13:06:01.890241  783699 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 13:06:01.890247  783699 command_runner.go:130] > # signature_policy = ""
	I1007 13:06:01.890252  783699 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 13:06:01.890260  783699 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 13:06:01.890267  783699 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 13:06:01.890279  783699 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1007 13:06:01.890287  783699 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 13:06:01.890294  783699 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 13:06:01.890300  783699 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 13:06:01.890308  783699 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 13:06:01.890315  783699 command_runner.go:130] > # changing them here.
	I1007 13:06:01.890319  783699 command_runner.go:130] > # insecure_registries = [
	I1007 13:06:01.890324  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890331  783699 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 13:06:01.890338  783699 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1007 13:06:01.890342  783699 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 13:06:01.890347  783699 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 13:06:01.890355  783699 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 13:06:01.890360  783699 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1007 13:06:01.890367  783699 command_runner.go:130] > # CNI plugins.
	I1007 13:06:01.890371  783699 command_runner.go:130] > [crio.network]
	I1007 13:06:01.890379  783699 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 13:06:01.890384  783699 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1007 13:06:01.890390  783699 command_runner.go:130] > # cni_default_network = ""
	I1007 13:06:01.890396  783699 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 13:06:01.890402  783699 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 13:06:01.890408  783699 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 13:06:01.890415  783699 command_runner.go:130] > # plugin_dirs = [
	I1007 13:06:01.890419  783699 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 13:06:01.890425  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890431  783699 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 13:06:01.890436  783699 command_runner.go:130] > [crio.metrics]
	I1007 13:06:01.890441  783699 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 13:06:01.890445  783699 command_runner.go:130] > enable_metrics = true
	I1007 13:06:01.890451  783699 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 13:06:01.890456  783699 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 13:06:01.890465  783699 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1007 13:06:01.890472  783699 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 13:06:01.890479  783699 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 13:06:01.890484  783699 command_runner.go:130] > # metrics_collectors = [
	I1007 13:06:01.890490  783699 command_runner.go:130] > # 	"operations",
	I1007 13:06:01.890494  783699 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 13:06:01.890501  783699 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 13:06:01.890505  783699 command_runner.go:130] > # 	"operations_errors",
	I1007 13:06:01.890510  783699 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 13:06:01.890514  783699 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 13:06:01.890520  783699 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 13:06:01.890527  783699 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 13:06:01.890533  783699 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 13:06:01.890538  783699 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 13:06:01.890544  783699 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 13:06:01.890548  783699 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 13:06:01.890554  783699 command_runner.go:130] > # 	"containers_oom_total",
	I1007 13:06:01.890559  783699 command_runner.go:130] > # 	"containers_oom",
	I1007 13:06:01.890565  783699 command_runner.go:130] > # 	"processes_defunct",
	I1007 13:06:01.890569  783699 command_runner.go:130] > # 	"operations_total",
	I1007 13:06:01.890573  783699 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 13:06:01.890578  783699 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 13:06:01.890584  783699 command_runner.go:130] > # 	"operations_errors_total",
	I1007 13:06:01.890589  783699 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 13:06:01.890597  783699 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 13:06:01.890601  783699 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 13:06:01.890607  783699 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 13:06:01.890611  783699 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 13:06:01.890615  783699 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 13:06:01.890622  783699 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 13:06:01.890627  783699 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 13:06:01.890632  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890637  783699 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 13:06:01.890643  783699 command_runner.go:130] > # metrics_port = 9090
	I1007 13:06:01.890648  783699 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 13:06:01.890654  783699 command_runner.go:130] > # metrics_socket = ""
	I1007 13:06:01.890659  783699 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 13:06:01.890667  783699 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 13:06:01.890674  783699 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 13:06:01.890681  783699 command_runner.go:130] > # certificate on any modification event.
	I1007 13:06:01.890685  783699 command_runner.go:130] > # metrics_cert = ""
	I1007 13:06:01.890691  783699 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 13:06:01.890696  783699 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 13:06:01.890703  783699 command_runner.go:130] > # metrics_key = ""
	I1007 13:06:01.890709  783699 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 13:06:01.890715  783699 command_runner.go:130] > [crio.tracing]
	I1007 13:06:01.890721  783699 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 13:06:01.890727  783699 command_runner.go:130] > # enable_tracing = false
	I1007 13:06:01.890732  783699 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1007 13:06:01.890739  783699 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 13:06:01.890746  783699 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 13:06:01.890752  783699 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 13:06:01.890756  783699 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 13:06:01.890762  783699 command_runner.go:130] > [crio.nri]
	I1007 13:06:01.890767  783699 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 13:06:01.890770  783699 command_runner.go:130] > # enable_nri = false
	I1007 13:06:01.890780  783699 command_runner.go:130] > # NRI socket to listen on.
	I1007 13:06:01.890788  783699 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 13:06:01.890793  783699 command_runner.go:130] > # NRI plugin directory to use.
	I1007 13:06:01.890797  783699 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 13:06:01.890808  783699 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 13:06:01.890812  783699 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 13:06:01.890820  783699 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 13:06:01.890824  783699 command_runner.go:130] > # nri_disable_connections = false
	I1007 13:06:01.890831  783699 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 13:06:01.890835  783699 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 13:06:01.890843  783699 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 13:06:01.890847  783699 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 13:06:01.890853  783699 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 13:06:01.890859  783699 command_runner.go:130] > [crio.stats]
	I1007 13:06:01.890865  783699 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 13:06:01.890872  783699 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 13:06:01.890876  783699 command_runner.go:130] > # stats_collection_period = 0
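The block above is the CRI-O configuration that minikube echoes back from the node; the non-default values it reports are cgroup_manager = "cgroupfs", pids_limit = 1024 and pause_image = "registry.k8s.io/pause:3.10". A minimal Go sketch for cross-checking those keys is shown below; it is illustrative only (not minikube code) and assumes the config file lives at /etc/crio/crio.conf on the node being inspected:

	// check_crio_conf.go - a minimal sketch (not part of minikube) that scans a
	// CRI-O config file for the keys echoed in the log above and prints them.
	// The path /etc/crio/crio.conf is an assumption about the node layout.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		wanted := map[string]bool{
			"cgroup_manager": true, // expected "cgroupfs" per the log above
			"pids_limit":     true, // expected 1024
			"pause_image":    true, // expected registry.k8s.io/pause:3.10
		}
		f, err := os.Open("/etc/crio/crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue // skip comments and blank lines
			}
			key := strings.TrimSpace(strings.SplitN(line, "=", 2)[0])
			if wanted[key] {
				fmt.Println(line)
			}
		}
	}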
	I1007 13:06:01.890965  783699 cni.go:84] Creating CNI manager for ""
	I1007 13:06:01.890980  783699 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:06:01.891001  783699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:06:01.891025  783699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-723069 NodeName:multinode-723069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:06:01.891175  783699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-723069"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
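
The generated kubeadm config above consists of four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal Go sketch for splitting those documents and reading the kubelet cgroupDriver (which should match CRI-O's cgroup_manager = "cgroupfs") is shown below; the local file name kubeadm.yaml and the use of gopkg.in/yaml.v3 are illustrative assumptions, and this is not minikube code:

	// split_kubeadm_yaml.go - a sketch that splits the multi-document kubeadm
	// config shown above, reports each document's kind, and prints the kubelet
	// cgroupDriver so it can be compared with CRI-O's cgroup_manager.
	// Assumes the config was saved locally as kubeadm.yaml (hypothetical path).
	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		raw, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil || m == nil {
				continue // skip empty or malformed documents
			}
			fmt.Println("kind:", m["kind"])
			if m["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", m["cgroupDriver"]) // expect "cgroupfs"
			}
		}
	}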
	
	I1007 13:06:01.891246  783699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:06:01.902940  783699 command_runner.go:130] > kubeadm
	I1007 13:06:01.902963  783699 command_runner.go:130] > kubectl
	I1007 13:06:01.902968  783699 command_runner.go:130] > kubelet
	I1007 13:06:01.902989  783699 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:06:01.903045  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:06:01.914351  783699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1007 13:06:01.933383  783699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:06:01.951954  783699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1007 13:06:01.970659  783699 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I1007 13:06:01.975034  783699 command_runner.go:130] > 192.168.39.213	control-plane.minikube.internal
	I1007 13:06:01.975123  783699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:02.121651  783699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:06:02.137295  783699 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069 for IP: 192.168.39.213
	I1007 13:06:02.137322  783699 certs.go:194] generating shared ca certs ...
	I1007 13:06:02.137344  783699 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:02.137544  783699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:06:02.137591  783699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:06:02.137605  783699 certs.go:256] generating profile certs ...
	I1007 13:06:02.137756  783699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/client.key
	I1007 13:06:02.137847  783699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key.ae866860
	I1007 13:06:02.137905  783699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key
	I1007 13:06:02.137922  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 13:06:02.137944  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 13:06:02.137962  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 13:06:02.137980  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 13:06:02.137999  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 13:06:02.138019  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 13:06:02.138052  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 13:06:02.138071  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 13:06:02.138138  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:06:02.138182  783699 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:06:02.138195  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:06:02.138233  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:06:02.138265  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:06:02.138291  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:06:02.138345  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:06:02.138380  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.138400  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.138422  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.139249  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:06:02.165662  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:06:02.191751  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:06:02.219139  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:06:02.244926  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 13:06:02.269854  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:06:02.296107  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:06:02.330607  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:06:02.355127  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:06:02.379833  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:06:02.404853  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:06:02.431617  783699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:06:02.449998  783699 ssh_runner.go:195] Run: openssl version
	I1007 13:06:02.456466  783699 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 13:06:02.456561  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:06:02.468804  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473481  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473526  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473634  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.480836  783699 command_runner.go:130] > 51391683
	I1007 13:06:02.480987  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:06:02.491577  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:06:02.504102  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508838  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508879  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508925  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.514965  783699 command_runner.go:130] > 3ec20f2e
	I1007 13:06:02.515039  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:06:02.525449  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:06:02.538044  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542787  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542825  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542874  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.548589  783699 command_runner.go:130] > b5213941
	I1007 13:06:02.548681  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:06:02.558781  783699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:02.563432  783699 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:02.563468  783699 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 13:06:02.563478  783699 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I1007 13:06:02.563487  783699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 13:06:02.563496  783699 command_runner.go:130] > Access: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563503  783699 command_runner.go:130] > Modify: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563514  783699 command_runner.go:130] > Change: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563522  783699 command_runner.go:130] >  Birth: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563588  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:06:02.569597  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.569695  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:06:02.575468  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.575546  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:06:02.581231  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.581473  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:06:02.587188  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.587283  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:06:02.593152  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.593241  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:06:02.599060  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.599136  783699 kubeadm.go:392] StartCluster: {Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:02.599335  783699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:06:02.599399  783699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:06:02.635408  783699 command_runner.go:130] > 1b16ed582a3d903619a49d7a66ce8b1592c1282c8540cb0ad83c09b5f573e961
	I1007 13:06:02.635434  783699 command_runner.go:130] > a8f48750e5f44b1a9fe2de0f7ed356eb3f84295bd35ae219bd06360d48637746
	I1007 13:06:02.635440  783699 command_runner.go:130] > de662ec335094b1ce5b3decd4cbb85684fdb292c298d0b826103ec7b98d3c353
	I1007 13:06:02.635448  783699 command_runner.go:130] > e9481b7d15376c901357d20afc1775124be74711ae330834dc42c6d2af46217d
	I1007 13:06:02.635454  783699 command_runner.go:130] > 37eeaddff114d4a2699ef3faf7b619845f39755af2918424877c3367a13704c8
	I1007 13:06:02.635459  783699 command_runner.go:130] > 635171b305b41ba120abf85e1896029801be0ab931c67478482a8c57cee642f3
	I1007 13:06:02.635464  783699 command_runner.go:130] > 9f10303baf5bb3d0d6d815be9ccda86d5200b25e7bd0b377a537e845b4076093
	I1007 13:06:02.635472  783699 command_runner.go:130] > fc7b1afeb1b640dc162c7830189d8f6c1133dba31223b21ceb639ceabb3636e9
	I1007 13:06:02.636772  783699 cri.go:89] found id: "1b16ed582a3d903619a49d7a66ce8b1592c1282c8540cb0ad83c09b5f573e961"
	I1007 13:06:02.636789  783699 cri.go:89] found id: "a8f48750e5f44b1a9fe2de0f7ed356eb3f84295bd35ae219bd06360d48637746"
	I1007 13:06:02.636793  783699 cri.go:89] found id: "de662ec335094b1ce5b3decd4cbb85684fdb292c298d0b826103ec7b98d3c353"
	I1007 13:06:02.636796  783699 cri.go:89] found id: "e9481b7d15376c901357d20afc1775124be74711ae330834dc42c6d2af46217d"
	I1007 13:06:02.636800  783699 cri.go:89] found id: "37eeaddff114d4a2699ef3faf7b619845f39755af2918424877c3367a13704c8"
	I1007 13:06:02.636804  783699 cri.go:89] found id: "635171b305b41ba120abf85e1896029801be0ab931c67478482a8c57cee642f3"
	I1007 13:06:02.636807  783699 cri.go:89] found id: "9f10303baf5bb3d0d6d815be9ccda86d5200b25e7bd0b377a537e845b4076093"
	I1007 13:06:02.636809  783699 cri.go:89] found id: "fc7b1afeb1b640dc162c7830189d8f6c1133dba31223b21ceb639ceabb3636e9"
	I1007 13:06:02.636811  783699 cri.go:89] found id: ""
	I1007 13:06:02.636867  783699 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-723069 -n multinode-723069
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-723069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (319.70s)

TestMultiNode/serial/StopMultiNode (145.5s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 stop
E1007 13:07:56.517585  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-723069 stop: exit status 82 (2m0.5046237s)

-- stdout --
	* Stopping node "multinode-723069-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-723069 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status
E1007 13:09:53.450501  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 status: (18.813520225s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr: (3.391532472s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-723069 -n multinode-723069
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 logs -n 25: (2.141957177s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069:/home/docker/cp-test_multinode-723069-m02_multinode-723069.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069 sudo cat                                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m02_multinode-723069.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03:/home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069-m03 sudo cat                                   | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp testdata/cp-test.txt                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069:/home/docker/cp-test_multinode-723069-m03_multinode-723069.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069 sudo cat                                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02:/home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069-m02 sudo cat                                   | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-723069 node stop m03                                                          | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	| node    | multinode-723069 node start                                                             | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| stop    | -p multinode-723069                                                                     | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| start   | -p multinode-723069                                                                     | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:04 UTC | 07 Oct 24 13:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	| node    | multinode-723069 node delete                                                            | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC | 07 Oct 24 13:07 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-723069 stop                                                                   | multinode-723069 | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:04:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:04:28.054395  783699 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:04:28.054537  783699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:04:28.054545  783699 out.go:358] Setting ErrFile to fd 2...
	I1007 13:04:28.054550  783699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:04:28.054719  783699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:04:28.055304  783699 out.go:352] Setting JSON to false
	I1007 13:04:28.056283  783699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10017,"bootTime":1728296251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:04:28.056433  783699 start.go:139] virtualization: kvm guest
	I1007 13:04:28.058923  783699 out.go:177] * [multinode-723069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:04:28.060685  783699 notify.go:220] Checking for updates...
	I1007 13:04:28.060746  783699 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:04:28.062744  783699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:04:28.064378  783699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:04:28.065747  783699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:04:28.067098  783699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:04:28.068411  783699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:04:28.070442  783699 config.go:182] Loaded profile config "multinode-723069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:28.070632  783699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:04:28.071408  783699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:04:28.071511  783699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:04:28.087822  783699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1007 13:04:28.088381  783699 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:04:28.089091  783699 main.go:141] libmachine: Using API Version  1
	I1007 13:04:28.089123  783699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:04:28.089606  783699 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:04:28.089854  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.126566  783699 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:04:28.127855  783699 start.go:297] selected driver: kvm2
	I1007 13:04:28.127875  783699 start.go:901] validating driver "kvm2" against &{Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:28.128046  783699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:04:28.128374  783699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:28.128493  783699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:04:28.144272  783699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:04:28.144968  783699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:04:28.145004  783699 cni.go:84] Creating CNI manager for ""
	I1007 13:04:28.145076  783699 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:04:28.145131  783699 start.go:340] cluster config:
	{Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubefl
ow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:28.145259  783699 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:28.147227  783699 out.go:177] * Starting "multinode-723069" primary control-plane node in "multinode-723069" cluster
	I1007 13:04:28.148415  783699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:28.148474  783699 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:04:28.148487  783699 cache.go:56] Caching tarball of preloaded images
	I1007 13:04:28.148576  783699 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:04:28.148589  783699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:04:28.148754  783699 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/config.json ...
	I1007 13:04:28.149005  783699 start.go:360] acquireMachinesLock for multinode-723069: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:04:28.149060  783699 start.go:364] duration metric: took 30.127µs to acquireMachinesLock for "multinode-723069"
	I1007 13:04:28.149082  783699 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:04:28.149092  783699 fix.go:54] fixHost starting: 
	I1007 13:04:28.149386  783699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:04:28.149419  783699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:04:28.164793  783699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1007 13:04:28.165339  783699 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:04:28.165927  783699 main.go:141] libmachine: Using API Version  1
	I1007 13:04:28.165955  783699 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:04:28.166310  783699 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:04:28.166500  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.166639  783699 main.go:141] libmachine: (multinode-723069) Calling .GetState
	I1007 13:04:28.168205  783699 fix.go:112] recreateIfNeeded on multinode-723069: state=Running err=<nil>
	W1007 13:04:28.168234  783699 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:04:28.170422  783699 out.go:177] * Updating the running kvm2 "multinode-723069" VM ...
	I1007 13:04:28.172040  783699 machine.go:93] provisionDockerMachine start ...
	I1007 13:04:28.172089  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:04:28.172430  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.175256  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.175708  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.175744  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.175991  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.176211  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.176369  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.176522  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.176718  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.176980  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.176996  783699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:04:28.287771  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-723069
	
	I1007 13:04:28.287814  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.288104  783699 buildroot.go:166] provisioning hostname "multinode-723069"
	I1007 13:04:28.288139  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.288356  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.292009  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.292574  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.292609  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.292871  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.293127  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.293384  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.293621  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.293824  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.294017  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.294059  783699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-723069 && echo "multinode-723069" | sudo tee /etc/hostname
	I1007 13:04:28.418976  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-723069
	
	I1007 13:04:28.419010  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.422176  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.422536  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.422577  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.422739  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.422949  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.423122  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.423240  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.423402  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.423594  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.423610  783699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-723069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-723069/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-723069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:04:28.541984  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:04:28.542149  783699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:04:28.542191  783699 buildroot.go:174] setting up certificates
	I1007 13:04:28.542202  783699 provision.go:84] configureAuth start
	I1007 13:04:28.542229  783699 main.go:141] libmachine: (multinode-723069) Calling .GetMachineName
	I1007 13:04:28.542545  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:04:28.545089  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.545550  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.545579  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.545700  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.548731  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.549138  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.549189  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.549369  783699 provision.go:143] copyHostCerts
	I1007 13:04:28.549405  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:04:28.549454  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:04:28.549473  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:04:28.549559  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:04:28.549673  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:04:28.549719  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:04:28.549729  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:04:28.549766  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:04:28.549845  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:04:28.549867  783699 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:04:28.549889  783699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:04:28.549926  783699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:04:28.550011  783699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.multinode-723069 san=[127.0.0.1 192.168.39.213 localhost minikube multinode-723069]
	I1007 13:04:28.730114  783699 provision.go:177] copyRemoteCerts
	I1007 13:04:28.730180  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:04:28.730211  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.733076  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.733472  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.733501  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.733735  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.733983  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.734204  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.734364  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:04:28.820893  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 13:04:28.820974  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:04:28.848792  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 13:04:28.848871  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 13:04:28.875218  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 13:04:28.875293  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:04:28.906647  783699 provision.go:87] duration metric: took 364.424362ms to configureAuth
	I1007 13:04:28.906681  783699 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:04:28.906949  783699 config.go:182] Loaded profile config "multinode-723069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:28.907045  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:04:28.910284  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.910727  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:04:28.910774  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:04:28.911005  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:04:28.911239  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.911417  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:04:28.911651  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:04:28.911868  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:28.912102  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:04:28.912121  783699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:05:59.782562  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:05:59.782602  783699 machine.go:96] duration metric: took 1m31.610527251s to provisionDockerMachine
	I1007 13:05:59.782621  783699 start.go:293] postStartSetup for "multinode-723069" (driver="kvm2")
	I1007 13:05:59.782633  783699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:05:59.782657  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:05:59.783010  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:05:59.783058  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:05:59.786312  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.786727  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:05:59.786750  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.786989  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:05:59.787190  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.787409  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:05:59.787570  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:05:59.878722  783699 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:05:59.883606  783699 command_runner.go:130] > NAME=Buildroot
	I1007 13:05:59.883640  783699 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 13:05:59.883644  783699 command_runner.go:130] > ID=buildroot
	I1007 13:05:59.883650  783699 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 13:05:59.883657  783699 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 13:05:59.883710  783699 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:05:59.883728  783699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:05:59.883810  783699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:05:59.883901  783699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:05:59.883917  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /etc/ssl/certs/7543242.pem
	I1007 13:05:59.884032  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:05:59.894766  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:05:59.922063  783699 start.go:296] duration metric: took 139.39458ms for postStartSetup
	I1007 13:05:59.922116  783699 fix.go:56] duration metric: took 1m31.77302452s for fixHost
	I1007 13:05:59.922149  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:05:59.924790  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.925211  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:05:59.925240  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:05:59.925407  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:05:59.925593  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.925768  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:05:59.925884  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:05:59.926018  783699 main.go:141] libmachine: Using SSH client type: native
	I1007 13:05:59.926235  783699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 13:05:59.926249  783699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:06:00.039492  783699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306360.019052140
	
	I1007 13:06:00.039518  783699 fix.go:216] guest clock: 1728306360.019052140
	I1007 13:06:00.039528  783699 fix.go:229] Guest: 2024-10-07 13:06:00.01905214 +0000 UTC Remote: 2024-10-07 13:05:59.922121693 +0000 UTC m=+91.912039561 (delta=96.930447ms)
	I1007 13:06:00.039582  783699 fix.go:200] guest clock delta is within tolerance: 96.930447ms
	I1007 13:06:00.039591  783699 start.go:83] releasing machines lock for "multinode-723069", held for 1m31.890517559s
	I1007 13:06:00.039621  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.039900  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:06:00.042532  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.042914  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.042943  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.043169  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.043744  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.043968  783699 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:06:00.044079  783699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:06:00.044137  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:06:00.044197  783699 ssh_runner.go:195] Run: cat /version.json
	I1007 13:06:00.044225  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:06:00.047010  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047099  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047499  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.047527  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047557  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:00.047580  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:00.047744  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:06:00.047855  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:06:00.047985  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:06:00.048054  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:06:00.048127  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:06:00.048213  783699 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:06:00.048231  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:06:00.048312  783699 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:06:00.127244  783699 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 13:06:00.127486  783699 ssh_runner.go:195] Run: systemctl --version
	I1007 13:06:00.152127  783699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 13:06:00.152296  783699 command_runner.go:130] > systemd 252 (252)
	I1007 13:06:00.152329  783699 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 13:06:00.152399  783699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:06:00.323524  783699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:06:00.340655  783699 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 13:06:00.340784  783699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:06:00.340850  783699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:06:00.354200  783699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:06:00.354232  783699 start.go:495] detecting cgroup driver to use...
	I1007 13:06:00.354315  783699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:06:00.378061  783699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:06:00.395558  783699 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:06:00.395624  783699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:06:00.413561  783699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:06:00.430094  783699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:06:00.587019  783699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:06:00.739463  783699 docker.go:233] disabling docker service ...
	I1007 13:06:00.739555  783699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:06:00.759239  783699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:06:00.775069  783699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:06:00.922268  783699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:06:01.069009  783699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:06:01.085523  783699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:06:01.106139  783699 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 13:06:01.106581  783699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:06:01.106658  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.119002  783699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:06:01.119083  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.131221  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.142937  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.154663  783699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:06:01.166258  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.177809  783699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.189505  783699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:01.201249  783699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:06:01.211805  783699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 13:06:01.211893  783699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:06:01.222500  783699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:01.375887  783699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:06:01.621493  783699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:06:01.621566  783699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:06:01.626610  783699 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 13:06:01.626637  783699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 13:06:01.626647  783699 command_runner.go:130] > Device: 0,22	Inode: 1342        Links: 1
	I1007 13:06:01.626657  783699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 13:06:01.626675  783699 command_runner.go:130] > Access: 2024-10-07 13:06:01.534557584 +0000
	I1007 13:06:01.626685  783699 command_runner.go:130] > Modify: 2024-10-07 13:06:01.458555918 +0000
	I1007 13:06:01.626694  783699 command_runner.go:130] > Change: 2024-10-07 13:06:01.458555918 +0000
	I1007 13:06:01.626703  783699 command_runner.go:130] >  Birth: -
	I1007 13:06:01.626833  783699 start.go:563] Will wait 60s for crictl version
	I1007 13:06:01.626901  783699 ssh_runner.go:195] Run: which crictl
	I1007 13:06:01.631301  783699 command_runner.go:130] > /usr/bin/crictl
	I1007 13:06:01.631378  783699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:06:01.676219  783699 command_runner.go:130] > Version:  0.1.0
	I1007 13:06:01.676245  783699 command_runner.go:130] > RuntimeName:  cri-o
	I1007 13:06:01.676249  783699 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 13:06:01.676256  783699 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 13:06:01.677366  783699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:06:01.677462  783699 ssh_runner.go:195] Run: crio --version
	I1007 13:06:01.706679  783699 command_runner.go:130] > crio version 1.29.1
	I1007 13:06:01.706703  783699 command_runner.go:130] > Version:        1.29.1
	I1007 13:06:01.706708  783699 command_runner.go:130] > GitCommit:      unknown
	I1007 13:06:01.706713  783699 command_runner.go:130] > GitCommitDate:  unknown
	I1007 13:06:01.706717  783699 command_runner.go:130] > GitTreeState:   clean
	I1007 13:06:01.706723  783699 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 13:06:01.706727  783699 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 13:06:01.706731  783699 command_runner.go:130] > Compiler:       gc
	I1007 13:06:01.706736  783699 command_runner.go:130] > Platform:       linux/amd64
	I1007 13:06:01.706740  783699 command_runner.go:130] > Linkmode:       dynamic
	I1007 13:06:01.706754  783699 command_runner.go:130] > BuildTags:      
	I1007 13:06:01.706758  783699 command_runner.go:130] >   containers_image_ostree_stub
	I1007 13:06:01.706762  783699 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 13:06:01.706766  783699 command_runner.go:130] >   btrfs_noversion
	I1007 13:06:01.706771  783699 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 13:06:01.706775  783699 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 13:06:01.706779  783699 command_runner.go:130] >   seccomp
	I1007 13:06:01.706783  783699 command_runner.go:130] > LDFlags:          unknown
	I1007 13:06:01.706788  783699 command_runner.go:130] > SeccompEnabled:   true
	I1007 13:06:01.706796  783699 command_runner.go:130] > AppArmorEnabled:  false
	I1007 13:06:01.708057  783699 ssh_runner.go:195] Run: crio --version
	I1007 13:06:01.739495  783699 command_runner.go:130] > crio version 1.29.1
	I1007 13:06:01.739522  783699 command_runner.go:130] > Version:        1.29.1
	I1007 13:06:01.739529  783699 command_runner.go:130] > GitCommit:      unknown
	I1007 13:06:01.739533  783699 command_runner.go:130] > GitCommitDate:  unknown
	I1007 13:06:01.739538  783699 command_runner.go:130] > GitTreeState:   clean
	I1007 13:06:01.739543  783699 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 13:06:01.739547  783699 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 13:06:01.739551  783699 command_runner.go:130] > Compiler:       gc
	I1007 13:06:01.739556  783699 command_runner.go:130] > Platform:       linux/amd64
	I1007 13:06:01.739563  783699 command_runner.go:130] > Linkmode:       dynamic
	I1007 13:06:01.739570  783699 command_runner.go:130] > BuildTags:      
	I1007 13:06:01.739575  783699 command_runner.go:130] >   containers_image_ostree_stub
	I1007 13:06:01.739582  783699 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 13:06:01.739588  783699 command_runner.go:130] >   btrfs_noversion
	I1007 13:06:01.739596  783699 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 13:06:01.739603  783699 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 13:06:01.739610  783699 command_runner.go:130] >   seccomp
	I1007 13:06:01.739620  783699 command_runner.go:130] > LDFlags:          unknown
	I1007 13:06:01.739627  783699 command_runner.go:130] > SeccompEnabled:   true
	I1007 13:06:01.739634  783699 command_runner.go:130] > AppArmorEnabled:  false
	I1007 13:06:01.742800  783699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:06:01.744347  783699 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:06:01.747228  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:01.747670  783699 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:06:01.747711  783699 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:06:01.747968  783699 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:06:01.752393  783699 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1007 13:06:01.752509  783699 kubeadm.go:883] updating cluster {Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:06:01.752675  783699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:06:01.752729  783699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:01.798518  783699 command_runner.go:130] > {
	I1007 13:06:01.798551  783699 command_runner.go:130] >   "images": [
	I1007 13:06:01.798557  783699 command_runner.go:130] >     {
	I1007 13:06:01.798568  783699 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 13:06:01.798575  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798586  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 13:06:01.798592  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798598  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798610  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 13:06:01.798621  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 13:06:01.798627  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798632  783699 command_runner.go:130] >       "size": "87190579",
	I1007 13:06:01.798636  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798640  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798646  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798650  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798653  783699 command_runner.go:130] >     },
	I1007 13:06:01.798657  783699 command_runner.go:130] >     {
	I1007 13:06:01.798663  783699 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 13:06:01.798671  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798676  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 13:06:01.798681  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798685  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798694  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 13:06:01.798705  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 13:06:01.798711  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798716  783699 command_runner.go:130] >       "size": "1363676",
	I1007 13:06:01.798723  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798730  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798734  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798740  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798743  783699 command_runner.go:130] >     },
	I1007 13:06:01.798747  783699 command_runner.go:130] >     {
	I1007 13:06:01.798757  783699 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 13:06:01.798762  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798766  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 13:06:01.798770  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798774  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798781  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 13:06:01.798789  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 13:06:01.798793  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798797  783699 command_runner.go:130] >       "size": "31470524",
	I1007 13:06:01.798802  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798806  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798809  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798814  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798820  783699 command_runner.go:130] >     },
	I1007 13:06:01.798825  783699 command_runner.go:130] >     {
	I1007 13:06:01.798831  783699 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 13:06:01.798835  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798841  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 13:06:01.798844  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798850  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798856  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 13:06:01.798871  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 13:06:01.798875  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798878  783699 command_runner.go:130] >       "size": "63273227",
	I1007 13:06:01.798882  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.798886  783699 command_runner.go:130] >       "username": "nonroot",
	I1007 13:06:01.798891  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798895  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798899  783699 command_runner.go:130] >     },
	I1007 13:06:01.798903  783699 command_runner.go:130] >     {
	I1007 13:06:01.798909  783699 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 13:06:01.798915  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.798920  783699 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 13:06:01.798927  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798931  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.798940  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 13:06:01.798947  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 13:06:01.798954  783699 command_runner.go:130] >       ],
	I1007 13:06:01.798959  783699 command_runner.go:130] >       "size": "149009664",
	I1007 13:06:01.798965  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.798969  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.798973  783699 command_runner.go:130] >       },
	I1007 13:06:01.798977  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.798981  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.798985  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.798991  783699 command_runner.go:130] >     },
	I1007 13:06:01.798994  783699 command_runner.go:130] >     {
	I1007 13:06:01.799000  783699 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 13:06:01.799006  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799011  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 13:06:01.799014  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799018  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799025  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 13:06:01.799037  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 13:06:01.799044  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799050  783699 command_runner.go:130] >       "size": "95237600",
	I1007 13:06:01.799054  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799059  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799063  783699 command_runner.go:130] >       },
	I1007 13:06:01.799069  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799073  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799078  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799082  783699 command_runner.go:130] >     },
	I1007 13:06:01.799085  783699 command_runner.go:130] >     {
	I1007 13:06:01.799093  783699 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 13:06:01.799097  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799104  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 13:06:01.799110  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799114  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799124  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 13:06:01.799134  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 13:06:01.799137  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799142  783699 command_runner.go:130] >       "size": "89437508",
	I1007 13:06:01.799146  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799150  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799154  783699 command_runner.go:130] >       },
	I1007 13:06:01.799161  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799165  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799171  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799175  783699 command_runner.go:130] >     },
	I1007 13:06:01.799179  783699 command_runner.go:130] >     {
	I1007 13:06:01.799185  783699 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 13:06:01.799191  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799196  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 13:06:01.799201  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799205  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799221  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 13:06:01.799231  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 13:06:01.799234  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799241  783699 command_runner.go:130] >       "size": "92733849",
	I1007 13:06:01.799245  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.799252  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799256  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799259  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799262  783699 command_runner.go:130] >     },
	I1007 13:06:01.799265  783699 command_runner.go:130] >     {
	I1007 13:06:01.799271  783699 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 13:06:01.799274  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799279  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 13:06:01.799283  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799287  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799295  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 13:06:01.799302  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 13:06:01.799305  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799308  783699 command_runner.go:130] >       "size": "68420934",
	I1007 13:06:01.799312  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799315  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.799318  783699 command_runner.go:130] >       },
	I1007 13:06:01.799321  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799325  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799329  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.799332  783699 command_runner.go:130] >     },
	I1007 13:06:01.799335  783699 command_runner.go:130] >     {
	I1007 13:06:01.799340  783699 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 13:06:01.799344  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.799348  783699 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 13:06:01.799351  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799355  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.799361  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 13:06:01.799370  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 13:06:01.799376  783699 command_runner.go:130] >       ],
	I1007 13:06:01.799379  783699 command_runner.go:130] >       "size": "742080",
	I1007 13:06:01.799385  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.799390  783699 command_runner.go:130] >         "value": "65535"
	I1007 13:06:01.799396  783699 command_runner.go:130] >       },
	I1007 13:06:01.799400  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.799406  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.799410  783699 command_runner.go:130] >       "pinned": true
	I1007 13:06:01.799416  783699 command_runner.go:130] >     }
	I1007 13:06:01.799420  783699 command_runner.go:130] >   ]
	I1007 13:06:01.799425  783699 command_runner.go:130] > }
	I1007 13:06:01.799607  783699 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:01.799620  783699 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:06:01.799670  783699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:01.833915  783699 command_runner.go:130] > {
	I1007 13:06:01.833944  783699 command_runner.go:130] >   "images": [
	I1007 13:06:01.833949  783699 command_runner.go:130] >     {
	I1007 13:06:01.833957  783699 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 13:06:01.833962  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.833968  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 13:06:01.833971  783699 command_runner.go:130] >       ],
	I1007 13:06:01.833975  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.833983  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 13:06:01.833990  783699 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 13:06:01.833993  783699 command_runner.go:130] >       ],
	I1007 13:06:01.833998  783699 command_runner.go:130] >       "size": "87190579",
	I1007 13:06:01.834001  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834005  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834037  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834045  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834059  783699 command_runner.go:130] >     },
	I1007 13:06:01.834064  783699 command_runner.go:130] >     {
	I1007 13:06:01.834072  783699 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 13:06:01.834078  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834090  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 13:06:01.834096  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834103  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834110  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 13:06:01.834118  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 13:06:01.834122  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834128  783699 command_runner.go:130] >       "size": "1363676",
	I1007 13:06:01.834132  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834139  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834144  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834148  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834153  783699 command_runner.go:130] >     },
	I1007 13:06:01.834156  783699 command_runner.go:130] >     {
	I1007 13:06:01.834162  783699 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 13:06:01.834167  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834172  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 13:06:01.834175  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834179  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834189  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 13:06:01.834199  783699 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 13:06:01.834203  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834207  783699 command_runner.go:130] >       "size": "31470524",
	I1007 13:06:01.834210  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834214  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834218  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834222  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834225  783699 command_runner.go:130] >     },
	I1007 13:06:01.834229  783699 command_runner.go:130] >     {
	I1007 13:06:01.834235  783699 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 13:06:01.834240  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834244  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 13:06:01.834247  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834252  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834259  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 13:06:01.834270  783699 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 13:06:01.834274  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834278  783699 command_runner.go:130] >       "size": "63273227",
	I1007 13:06:01.834282  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834287  783699 command_runner.go:130] >       "username": "nonroot",
	I1007 13:06:01.834294  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834299  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834302  783699 command_runner.go:130] >     },
	I1007 13:06:01.834305  783699 command_runner.go:130] >     {
	I1007 13:06:01.834311  783699 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 13:06:01.834316  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834321  783699 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 13:06:01.834327  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834331  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834337  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 13:06:01.834345  783699 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 13:06:01.834349  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834353  783699 command_runner.go:130] >       "size": "149009664",
	I1007 13:06:01.834357  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834361  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834364  783699 command_runner.go:130] >       },
	I1007 13:06:01.834368  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834372  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834377  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834380  783699 command_runner.go:130] >     },
	I1007 13:06:01.834384  783699 command_runner.go:130] >     {
	I1007 13:06:01.834390  783699 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 13:06:01.834394  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834399  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 13:06:01.834403  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834407  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834417  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 13:06:01.834424  783699 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 13:06:01.834429  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834433  783699 command_runner.go:130] >       "size": "95237600",
	I1007 13:06:01.834439  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834444  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834447  783699 command_runner.go:130] >       },
	I1007 13:06:01.834451  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834455  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834460  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834464  783699 command_runner.go:130] >     },
	I1007 13:06:01.834467  783699 command_runner.go:130] >     {
	I1007 13:06:01.834473  783699 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 13:06:01.834478  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834483  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 13:06:01.834486  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834491  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834498  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 13:06:01.834507  783699 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 13:06:01.834513  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834517  783699 command_runner.go:130] >       "size": "89437508",
	I1007 13:06:01.834520  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834524  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834528  783699 command_runner.go:130] >       },
	I1007 13:06:01.834532  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834537  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834541  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834546  783699 command_runner.go:130] >     },
	I1007 13:06:01.834549  783699 command_runner.go:130] >     {
	I1007 13:06:01.834555  783699 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 13:06:01.834561  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834566  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 13:06:01.834571  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834574  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834589  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 13:06:01.834599  783699 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 13:06:01.834603  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834607  783699 command_runner.go:130] >       "size": "92733849",
	I1007 13:06:01.834611  783699 command_runner.go:130] >       "uid": null,
	I1007 13:06:01.834617  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834621  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834625  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834629  783699 command_runner.go:130] >     },
	I1007 13:06:01.834632  783699 command_runner.go:130] >     {
	I1007 13:06:01.834638  783699 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 13:06:01.834651  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834658  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 13:06:01.834662  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834668  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834675  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 13:06:01.834684  783699 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 13:06:01.834687  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834691  783699 command_runner.go:130] >       "size": "68420934",
	I1007 13:06:01.834695  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834699  783699 command_runner.go:130] >         "value": "0"
	I1007 13:06:01.834703  783699 command_runner.go:130] >       },
	I1007 13:06:01.834707  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834713  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834718  783699 command_runner.go:130] >       "pinned": false
	I1007 13:06:01.834721  783699 command_runner.go:130] >     },
	I1007 13:06:01.834725  783699 command_runner.go:130] >     {
	I1007 13:06:01.834730  783699 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 13:06:01.834736  783699 command_runner.go:130] >       "repoTags": [
	I1007 13:06:01.834740  783699 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 13:06:01.834744  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834748  783699 command_runner.go:130] >       "repoDigests": [
	I1007 13:06:01.834756  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 13:06:01.834765  783699 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 13:06:01.834769  783699 command_runner.go:130] >       ],
	I1007 13:06:01.834772  783699 command_runner.go:130] >       "size": "742080",
	I1007 13:06:01.834779  783699 command_runner.go:130] >       "uid": {
	I1007 13:06:01.834783  783699 command_runner.go:130] >         "value": "65535"
	I1007 13:06:01.834787  783699 command_runner.go:130] >       },
	I1007 13:06:01.834791  783699 command_runner.go:130] >       "username": "",
	I1007 13:06:01.834794  783699 command_runner.go:130] >       "spec": null,
	I1007 13:06:01.834798  783699 command_runner.go:130] >       "pinned": true
	I1007 13:06:01.834803  783699 command_runner.go:130] >     }
	I1007 13:06:01.834808  783699 command_runner.go:130] >   ]
	I1007 13:06:01.834812  783699 command_runner.go:130] > }
	I1007 13:06:01.835649  783699 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:01.835672  783699 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:06:01.835681  783699 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.31.1 crio true true} ...
	I1007 13:06:01.835785  783699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-723069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:06:01.835859  783699 ssh_runner.go:195] Run: crio config
	I1007 13:06:01.872651  783699 command_runner.go:130] ! time="2024-10-07 13:06:01.852324614Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 13:06:01.878179  783699 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 13:06:01.886755  783699 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 13:06:01.886787  783699 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 13:06:01.886797  783699 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 13:06:01.886807  783699 command_runner.go:130] > #
	I1007 13:06:01.886817  783699 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 13:06:01.886826  783699 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 13:06:01.886833  783699 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 13:06:01.886842  783699 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 13:06:01.886846  783699 command_runner.go:130] > # reload'.
	I1007 13:06:01.886856  783699 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 13:06:01.886865  783699 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 13:06:01.886875  783699 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 13:06:01.886887  783699 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 13:06:01.886909  783699 command_runner.go:130] > [crio]
	I1007 13:06:01.886919  783699 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 13:06:01.886923  783699 command_runner.go:130] > # containers images, in this directory.
	I1007 13:06:01.886928  783699 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 13:06:01.886939  783699 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 13:06:01.886945  783699 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 13:06:01.886953  783699 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1007 13:06:01.886959  783699 command_runner.go:130] > # imagestore = ""
	I1007 13:06:01.886965  783699 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 13:06:01.886973  783699 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 13:06:01.886977  783699 command_runner.go:130] > storage_driver = "overlay"
	I1007 13:06:01.886983  783699 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 13:06:01.886988  783699 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 13:06:01.886993  783699 command_runner.go:130] > storage_option = [
	I1007 13:06:01.886997  783699 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 13:06:01.887000  783699 command_runner.go:130] > ]
	I1007 13:06:01.887006  783699 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 13:06:01.887014  783699 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 13:06:01.887018  783699 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 13:06:01.887023  783699 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 13:06:01.887031  783699 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 13:06:01.887035  783699 command_runner.go:130] > # always happen on a node reboot
	I1007 13:06:01.887040  783699 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 13:06:01.887053  783699 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 13:06:01.887061  783699 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 13:06:01.887066  783699 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 13:06:01.887072  783699 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 13:06:01.887081  783699 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 13:06:01.887088  783699 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 13:06:01.887094  783699 command_runner.go:130] > # internal_wipe = true
	I1007 13:06:01.887102  783699 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 13:06:01.887109  783699 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 13:06:01.887113  783699 command_runner.go:130] > # internal_repair = false
	I1007 13:06:01.887120  783699 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 13:06:01.887126  783699 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 13:06:01.887133  783699 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 13:06:01.887139  783699 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 13:06:01.887149  783699 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 13:06:01.887156  783699 command_runner.go:130] > [crio.api]
	I1007 13:06:01.887161  783699 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 13:06:01.887168  783699 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 13:06:01.887173  783699 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 13:06:01.887181  783699 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 13:06:01.887189  783699 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 13:06:01.887196  783699 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 13:06:01.887200  783699 command_runner.go:130] > # stream_port = "0"
	I1007 13:06:01.887207  783699 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 13:06:01.887212  783699 command_runner.go:130] > # stream_enable_tls = false
	I1007 13:06:01.887220  783699 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 13:06:01.887227  783699 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 13:06:01.887233  783699 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 13:06:01.887244  783699 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 13:06:01.887249  783699 command_runner.go:130] > # minutes.
	I1007 13:06:01.887253  783699 command_runner.go:130] > # stream_tls_cert = ""
	I1007 13:06:01.887261  783699 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 13:06:01.887268  783699 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 13:06:01.887274  783699 command_runner.go:130] > # stream_tls_key = ""
	I1007 13:06:01.887280  783699 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 13:06:01.887288  783699 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 13:06:01.887302  783699 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 13:06:01.887308  783699 command_runner.go:130] > # stream_tls_ca = ""
	I1007 13:06:01.887315  783699 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 13:06:01.887322  783699 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 13:06:01.887330  783699 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 13:06:01.887337  783699 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1007 13:06:01.887343  783699 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 13:06:01.887351  783699 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 13:06:01.887357  783699 command_runner.go:130] > [crio.runtime]
	I1007 13:06:01.887363  783699 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 13:06:01.887370  783699 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 13:06:01.887376  783699 command_runner.go:130] > # "nofile=1024:2048"
	I1007 13:06:01.887381  783699 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 13:06:01.887387  783699 command_runner.go:130] > # default_ulimits = [
	I1007 13:06:01.887391  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887397  783699 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 13:06:01.887404  783699 command_runner.go:130] > # no_pivot = false
	I1007 13:06:01.887412  783699 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 13:06:01.887420  783699 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 13:06:01.887425  783699 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 13:06:01.887431  783699 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 13:06:01.887437  783699 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 13:06:01.887443  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 13:06:01.887450  783699 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 13:06:01.887454  783699 command_runner.go:130] > # Cgroup setting for conmon
	I1007 13:06:01.887463  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 13:06:01.887469  783699 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 13:06:01.887475  783699 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 13:06:01.887482  783699 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 13:06:01.887488  783699 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 13:06:01.887494  783699 command_runner.go:130] > conmon_env = [
	I1007 13:06:01.887499  783699 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 13:06:01.887504  783699 command_runner.go:130] > ]
	I1007 13:06:01.887509  783699 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 13:06:01.887518  783699 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 13:06:01.887526  783699 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 13:06:01.887532  783699 command_runner.go:130] > # default_env = [
	I1007 13:06:01.887536  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887544  783699 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 13:06:01.887552  783699 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1007 13:06:01.887558  783699 command_runner.go:130] > # selinux = false
	I1007 13:06:01.887564  783699 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 13:06:01.887572  783699 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 13:06:01.887580  783699 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 13:06:01.887587  783699 command_runner.go:130] > # seccomp_profile = ""
	I1007 13:06:01.887592  783699 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 13:06:01.887599  783699 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 13:06:01.887605  783699 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 13:06:01.887625  783699 command_runner.go:130] > # which might increase security.
	I1007 13:06:01.887638  783699 command_runner.go:130] > # This option is currently deprecated,
	I1007 13:06:01.887644  783699 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 13:06:01.887648  783699 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 13:06:01.887654  783699 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 13:06:01.887663  783699 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 13:06:01.887673  783699 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 13:06:01.887682  783699 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 13:06:01.887689  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.887693  783699 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 13:06:01.887701  783699 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 13:06:01.887707  783699 command_runner.go:130] > # the cgroup blockio controller.
	I1007 13:06:01.887713  783699 command_runner.go:130] > # blockio_config_file = ""
	I1007 13:06:01.887720  783699 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 13:06:01.887726  783699 command_runner.go:130] > # blockio parameters.
	I1007 13:06:01.887730  783699 command_runner.go:130] > # blockio_reload = false
	I1007 13:06:01.887738  783699 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 13:06:01.887744  783699 command_runner.go:130] > # irqbalance daemon.
	I1007 13:06:01.887750  783699 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 13:06:01.887759  783699 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 13:06:01.887767  783699 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 13:06:01.887774  783699 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 13:06:01.887781  783699 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 13:06:01.887787  783699 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 13:06:01.887794  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.887805  783699 command_runner.go:130] > # rdt_config_file = ""
	I1007 13:06:01.887812  783699 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 13:06:01.887816  783699 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 13:06:01.887836  783699 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 13:06:01.887842  783699 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 13:06:01.887848  783699 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 13:06:01.887857  783699 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 13:06:01.887863  783699 command_runner.go:130] > # will be added.
	I1007 13:06:01.887867  783699 command_runner.go:130] > # default_capabilities = [
	I1007 13:06:01.887873  783699 command_runner.go:130] > # 	"CHOWN",
	I1007 13:06:01.887877  783699 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 13:06:01.887883  783699 command_runner.go:130] > # 	"FSETID",
	I1007 13:06:01.887887  783699 command_runner.go:130] > # 	"FOWNER",
	I1007 13:06:01.887892  783699 command_runner.go:130] > # 	"SETGID",
	I1007 13:06:01.887896  783699 command_runner.go:130] > # 	"SETUID",
	I1007 13:06:01.887901  783699 command_runner.go:130] > # 	"SETPCAP",
	I1007 13:06:01.887907  783699 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 13:06:01.887911  783699 command_runner.go:130] > # 	"KILL",
	I1007 13:06:01.887917  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887925  783699 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 13:06:01.887933  783699 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 13:06:01.887943  783699 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 13:06:01.887950  783699 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 13:06:01.887955  783699 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 13:06:01.887962  783699 command_runner.go:130] > default_sysctls = [
	I1007 13:06:01.887966  783699 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 13:06:01.887970  783699 command_runner.go:130] > ]
	I1007 13:06:01.887975  783699 command_runner.go:130] > # List of devices on the host that a
	I1007 13:06:01.887981  783699 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 13:06:01.887985  783699 command_runner.go:130] > # allowed_devices = [
	I1007 13:06:01.887989  783699 command_runner.go:130] > # 	"/dev/fuse",
	I1007 13:06:01.887992  783699 command_runner.go:130] > # ]
	I1007 13:06:01.887998  783699 command_runner.go:130] > # List of additional devices. specified as
	I1007 13:06:01.888007  783699 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 13:06:01.888012  783699 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 13:06:01.888017  783699 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 13:06:01.888021  783699 command_runner.go:130] > # additional_devices = [
	I1007 13:06:01.888027  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888032  783699 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 13:06:01.888038  783699 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 13:06:01.888042  783699 command_runner.go:130] > # 	"/etc/cdi",
	I1007 13:06:01.888047  783699 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 13:06:01.888054  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888062  783699 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 13:06:01.888069  783699 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 13:06:01.888075  783699 command_runner.go:130] > # Defaults to false.
	I1007 13:06:01.888081  783699 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 13:06:01.888089  783699 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 13:06:01.888097  783699 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 13:06:01.888103  783699 command_runner.go:130] > # hooks_dir = [
	I1007 13:06:01.888108  783699 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 13:06:01.888113  783699 command_runner.go:130] > # ]
	I1007 13:06:01.888119  783699 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 13:06:01.888127  783699 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 13:06:01.888133  783699 command_runner.go:130] > # its default mounts from the following two files:
	I1007 13:06:01.888139  783699 command_runner.go:130] > #
	I1007 13:06:01.888145  783699 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 13:06:01.888154  783699 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 13:06:01.888160  783699 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 13:06:01.888165  783699 command_runner.go:130] > #
	I1007 13:06:01.888171  783699 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 13:06:01.888180  783699 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 13:06:01.888188  783699 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 13:06:01.888199  783699 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 13:06:01.888204  783699 command_runner.go:130] > #
	I1007 13:06:01.888209  783699 command_runner.go:130] > # default_mounts_file = ""
	I1007 13:06:01.888217  783699 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 13:06:01.888226  783699 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 13:06:01.888230  783699 command_runner.go:130] > pids_limit = 1024
	I1007 13:06:01.888237  783699 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1007 13:06:01.888246  783699 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 13:06:01.888252  783699 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 13:06:01.888263  783699 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 13:06:01.888271  783699 command_runner.go:130] > # log_size_max = -1
	I1007 13:06:01.888282  783699 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 13:06:01.888291  783699 command_runner.go:130] > # log_to_journald = false
	I1007 13:06:01.888303  783699 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 13:06:01.888314  783699 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 13:06:01.888325  783699 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 13:06:01.888335  783699 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 13:06:01.888346  783699 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 13:06:01.888355  783699 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 13:06:01.888367  783699 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 13:06:01.888376  783699 command_runner.go:130] > # read_only = false
	I1007 13:06:01.888388  783699 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 13:06:01.888400  783699 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 13:06:01.888409  783699 command_runner.go:130] > # live configuration reload.
	I1007 13:06:01.888416  783699 command_runner.go:130] > # log_level = "info"
	I1007 13:06:01.888427  783699 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 13:06:01.888438  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.888449  783699 command_runner.go:130] > # log_filter = ""
	I1007 13:06:01.888461  783699 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 13:06:01.888478  783699 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 13:06:01.888488  783699 command_runner.go:130] > # separated by comma.
	I1007 13:06:01.888500  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888510  783699 command_runner.go:130] > # uid_mappings = ""
	I1007 13:06:01.888522  783699 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 13:06:01.888535  783699 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 13:06:01.888544  783699 command_runner.go:130] > # separated by comma.
	I1007 13:06:01.888559  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888571  783699 command_runner.go:130] > # gid_mappings = ""
	I1007 13:06:01.888584  783699 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 13:06:01.888596  783699 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 13:06:01.888608  783699 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 13:06:01.888622  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888633  783699 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 13:06:01.888644  783699 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 13:06:01.888658  783699 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 13:06:01.888672  783699 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 13:06:01.888686  783699 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 13:06:01.888696  783699 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 13:06:01.888708  783699 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 13:06:01.888721  783699 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 13:06:01.888733  783699 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 13:06:01.888742  783699 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 13:06:01.888751  783699 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 13:06:01.888763  783699 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 13:06:01.888773  783699 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 13:06:01.888784  783699 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 13:06:01.888793  783699 command_runner.go:130] > drop_infra_ctr = false
	I1007 13:06:01.888810  783699 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 13:06:01.888822  783699 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 13:06:01.888835  783699 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 13:06:01.888846  783699 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 13:06:01.888859  783699 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 13:06:01.888872  783699 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 13:06:01.888884  783699 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 13:06:01.888896  783699 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 13:06:01.888904  783699 command_runner.go:130] > # shared_cpuset = ""
	I1007 13:06:01.888916  783699 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 13:06:01.888927  783699 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 13:06:01.888937  783699 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 13:06:01.888951  783699 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 13:06:01.888960  783699 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 13:06:01.888969  783699 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 13:06:01.888984  783699 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 13:06:01.888994  783699 command_runner.go:130] > # enable_criu_support = false
	I1007 13:06:01.889002  783699 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 13:06:01.889014  783699 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 13:06:01.889021  783699 command_runner.go:130] > # enable_pod_events = false
	I1007 13:06:01.889034  783699 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 13:06:01.889048  783699 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 13:06:01.889060  783699 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 13:06:01.889070  783699 command_runner.go:130] > # default_runtime = "runc"
	I1007 13:06:01.889081  783699 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 13:06:01.889093  783699 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1007 13:06:01.889110  783699 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 13:06:01.889121  783699 command_runner.go:130] > # creation as a file is not desired either.
	I1007 13:06:01.889136  783699 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 13:06:01.889149  783699 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 13:06:01.889158  783699 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 13:06:01.889166  783699 command_runner.go:130] > # ]
	I1007 13:06:01.889177  783699 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 13:06:01.889189  783699 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 13:06:01.889202  783699 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 13:06:01.889212  783699 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 13:06:01.889220  783699 command_runner.go:130] > #
	I1007 13:06:01.889227  783699 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 13:06:01.889237  783699 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 13:06:01.889263  783699 command_runner.go:130] > # runtime_type = "oci"
	I1007 13:06:01.889273  783699 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 13:06:01.889277  783699 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 13:06:01.889284  783699 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 13:06:01.889288  783699 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 13:06:01.889294  783699 command_runner.go:130] > # monitor_env = []
	I1007 13:06:01.889299  783699 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 13:06:01.889305  783699 command_runner.go:130] > # allowed_annotations = []
	I1007 13:06:01.889310  783699 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 13:06:01.889315  783699 command_runner.go:130] > # Where:
	I1007 13:06:01.889320  783699 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 13:06:01.889328  783699 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 13:06:01.889336  783699 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 13:06:01.889344  783699 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 13:06:01.889353  783699 command_runner.go:130] > #   in $PATH.
	I1007 13:06:01.889361  783699 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 13:06:01.889366  783699 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 13:06:01.889374  783699 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 13:06:01.889381  783699 command_runner.go:130] > #   state.
	I1007 13:06:01.889387  783699 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 13:06:01.889395  783699 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1007 13:06:01.889402  783699 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 13:06:01.889407  783699 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 13:06:01.889415  783699 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 13:06:01.889424  783699 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 13:06:01.889429  783699 command_runner.go:130] > #   The currently recognized values are:
	I1007 13:06:01.889437  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 13:06:01.889447  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 13:06:01.889453  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 13:06:01.889461  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 13:06:01.889468  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 13:06:01.889476  783699 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 13:06:01.889485  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 13:06:01.889493  783699 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 13:06:01.889501  783699 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 13:06:01.889509  783699 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 13:06:01.889515  783699 command_runner.go:130] > #   deprecated option "conmon".
	I1007 13:06:01.889522  783699 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 13:06:01.889529  783699 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 13:06:01.889535  783699 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 13:06:01.889542  783699 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 13:06:01.889549  783699 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1007 13:06:01.889555  783699 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 13:06:01.889562  783699 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 13:06:01.889569  783699 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 13:06:01.889572  783699 command_runner.go:130] > #
	I1007 13:06:01.889577  783699 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 13:06:01.889585  783699 command_runner.go:130] > #
	I1007 13:06:01.889594  783699 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 13:06:01.889600  783699 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 13:06:01.889606  783699 command_runner.go:130] > #
	I1007 13:06:01.889611  783699 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 13:06:01.889619  783699 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 13:06:01.889622  783699 command_runner.go:130] > #
	I1007 13:06:01.889630  783699 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 13:06:01.889636  783699 command_runner.go:130] > # feature.
	I1007 13:06:01.889639  783699 command_runner.go:130] > #
	I1007 13:06:01.889646  783699 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 13:06:01.889654  783699 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 13:06:01.889660  783699 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 13:06:01.889668  783699 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 13:06:01.889676  783699 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 13:06:01.889679  783699 command_runner.go:130] > #
	I1007 13:06:01.889687  783699 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 13:06:01.889695  783699 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 13:06:01.889699  783699 command_runner.go:130] > #
	I1007 13:06:01.889705  783699 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 13:06:01.889713  783699 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 13:06:01.889718  783699 command_runner.go:130] > #
	I1007 13:06:01.889724  783699 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 13:06:01.889731  783699 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 13:06:01.889737  783699 command_runner.go:130] > # limitation.
	I1007 13:06:01.889744  783699 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 13:06:01.889750  783699 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 13:06:01.889754  783699 command_runner.go:130] > runtime_type = "oci"
	I1007 13:06:01.889760  783699 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 13:06:01.889764  783699 command_runner.go:130] > runtime_config_path = ""
	I1007 13:06:01.889772  783699 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 13:06:01.889776  783699 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 13:06:01.889783  783699 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 13:06:01.889786  783699 command_runner.go:130] > monitor_env = [
	I1007 13:06:01.889792  783699 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 13:06:01.889797  783699 command_runner.go:130] > ]
	I1007 13:06:01.889807  783699 command_runner.go:130] > privileged_without_host_devices = false
	I1007 13:06:01.889813  783699 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 13:06:01.889821  783699 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 13:06:01.889829  783699 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 13:06:01.889838  783699 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1007 13:06:01.889850  783699 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 13:06:01.889858  783699 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 13:06:01.889868  783699 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 13:06:01.889878  783699 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 13:06:01.889884  783699 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1007 13:06:01.889893  783699 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 13:06:01.889899  783699 command_runner.go:130] > # Example:
	I1007 13:06:01.889903  783699 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 13:06:01.889911  783699 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 13:06:01.889915  783699 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 13:06:01.889922  783699 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 13:06:01.889926  783699 command_runner.go:130] > # cpuset = 0
	I1007 13:06:01.889932  783699 command_runner.go:130] > # cpushares = "0-1"
	I1007 13:06:01.889935  783699 command_runner.go:130] > # Where:
	I1007 13:06:01.889942  783699 command_runner.go:130] > # The workload name is workload-type.
	I1007 13:06:01.889948  783699 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 13:06:01.889956  783699 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 13:06:01.889961  783699 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 13:06:01.889968  783699 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 13:06:01.889976  783699 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1007 13:06:01.889980  783699 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 13:06:01.889989  783699 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 13:06:01.889993  783699 command_runner.go:130] > # Default value is set to true
	I1007 13:06:01.889999  783699 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 13:06:01.890004  783699 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 13:06:01.890011  783699 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 13:06:01.890020  783699 command_runner.go:130] > # Default value is set to 'false'
	I1007 13:06:01.890035  783699 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 13:06:01.890042  783699 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 13:06:01.890046  783699 command_runner.go:130] > #
	I1007 13:06:01.890051  783699 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 13:06:01.890057  783699 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 13:06:01.890062  783699 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 13:06:01.890068  783699 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 13:06:01.890076  783699 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 13:06:01.890080  783699 command_runner.go:130] > [crio.image]
	I1007 13:06:01.890085  783699 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 13:06:01.890089  783699 command_runner.go:130] > # default_transport = "docker://"
	I1007 13:06:01.890095  783699 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 13:06:01.890100  783699 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 13:06:01.890104  783699 command_runner.go:130] > # global_auth_file = ""
	I1007 13:06:01.890109  783699 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 13:06:01.890113  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.890118  783699 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 13:06:01.890123  783699 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 13:06:01.890128  783699 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 13:06:01.890133  783699 command_runner.go:130] > # This option supports live configuration reload.
	I1007 13:06:01.890137  783699 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 13:06:01.890142  783699 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 13:06:01.890147  783699 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1007 13:06:01.890153  783699 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1007 13:06:01.890158  783699 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 13:06:01.890162  783699 command_runner.go:130] > # pause_command = "/pause"
	I1007 13:06:01.890169  783699 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 13:06:01.890174  783699 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 13:06:01.890183  783699 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 13:06:01.890191  783699 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 13:06:01.890197  783699 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 13:06:01.890202  783699 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 13:06:01.890207  783699 command_runner.go:130] > # pinned_images = [
	I1007 13:06:01.890210  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890215  783699 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 13:06:01.890222  783699 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 13:06:01.890227  783699 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 13:06:01.890236  783699 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 13:06:01.890241  783699 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 13:06:01.890247  783699 command_runner.go:130] > # signature_policy = ""
	I1007 13:06:01.890252  783699 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 13:06:01.890260  783699 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 13:06:01.890267  783699 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 13:06:01.890279  783699 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1007 13:06:01.890287  783699 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 13:06:01.890294  783699 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 13:06:01.890300  783699 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 13:06:01.890308  783699 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 13:06:01.890315  783699 command_runner.go:130] > # changing them here.
	I1007 13:06:01.890319  783699 command_runner.go:130] > # insecure_registries = [
	I1007 13:06:01.890324  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890331  783699 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 13:06:01.890338  783699 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1007 13:06:01.890342  783699 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 13:06:01.890347  783699 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 13:06:01.890355  783699 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 13:06:01.890360  783699 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1007 13:06:01.890367  783699 command_runner.go:130] > # CNI plugins.
	I1007 13:06:01.890371  783699 command_runner.go:130] > [crio.network]
	I1007 13:06:01.890379  783699 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 13:06:01.890384  783699 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1007 13:06:01.890390  783699 command_runner.go:130] > # cni_default_network = ""
	I1007 13:06:01.890396  783699 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 13:06:01.890402  783699 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 13:06:01.890408  783699 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 13:06:01.890415  783699 command_runner.go:130] > # plugin_dirs = [
	I1007 13:06:01.890419  783699 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 13:06:01.890425  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890431  783699 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 13:06:01.890436  783699 command_runner.go:130] > [crio.metrics]
	I1007 13:06:01.890441  783699 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 13:06:01.890445  783699 command_runner.go:130] > enable_metrics = true
	I1007 13:06:01.890451  783699 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 13:06:01.890456  783699 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 13:06:01.890465  783699 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1007 13:06:01.890472  783699 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 13:06:01.890479  783699 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 13:06:01.890484  783699 command_runner.go:130] > # metrics_collectors = [
	I1007 13:06:01.890490  783699 command_runner.go:130] > # 	"operations",
	I1007 13:06:01.890494  783699 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 13:06:01.890501  783699 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 13:06:01.890505  783699 command_runner.go:130] > # 	"operations_errors",
	I1007 13:06:01.890510  783699 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 13:06:01.890514  783699 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 13:06:01.890520  783699 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 13:06:01.890527  783699 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 13:06:01.890533  783699 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 13:06:01.890538  783699 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 13:06:01.890544  783699 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 13:06:01.890548  783699 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 13:06:01.890554  783699 command_runner.go:130] > # 	"containers_oom_total",
	I1007 13:06:01.890559  783699 command_runner.go:130] > # 	"containers_oom",
	I1007 13:06:01.890565  783699 command_runner.go:130] > # 	"processes_defunct",
	I1007 13:06:01.890569  783699 command_runner.go:130] > # 	"operations_total",
	I1007 13:06:01.890573  783699 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 13:06:01.890578  783699 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 13:06:01.890584  783699 command_runner.go:130] > # 	"operations_errors_total",
	I1007 13:06:01.890589  783699 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 13:06:01.890597  783699 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 13:06:01.890601  783699 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 13:06:01.890607  783699 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 13:06:01.890611  783699 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 13:06:01.890615  783699 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 13:06:01.890622  783699 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 13:06:01.890627  783699 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 13:06:01.890632  783699 command_runner.go:130] > # ]
	I1007 13:06:01.890637  783699 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 13:06:01.890643  783699 command_runner.go:130] > # metrics_port = 9090
	I1007 13:06:01.890648  783699 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 13:06:01.890654  783699 command_runner.go:130] > # metrics_socket = ""
	I1007 13:06:01.890659  783699 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 13:06:01.890667  783699 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 13:06:01.890674  783699 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 13:06:01.890681  783699 command_runner.go:130] > # certificate on any modification event.
	I1007 13:06:01.890685  783699 command_runner.go:130] > # metrics_cert = ""
	I1007 13:06:01.890691  783699 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 13:06:01.890696  783699 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 13:06:01.890703  783699 command_runner.go:130] > # metrics_key = ""
	I1007 13:06:01.890709  783699 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 13:06:01.890715  783699 command_runner.go:130] > [crio.tracing]
	I1007 13:06:01.890721  783699 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 13:06:01.890727  783699 command_runner.go:130] > # enable_tracing = false
	I1007 13:06:01.890732  783699 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1007 13:06:01.890739  783699 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 13:06:01.890746  783699 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 13:06:01.890752  783699 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 13:06:01.890756  783699 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 13:06:01.890762  783699 command_runner.go:130] > [crio.nri]
	I1007 13:06:01.890767  783699 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 13:06:01.890770  783699 command_runner.go:130] > # enable_nri = false
	I1007 13:06:01.890780  783699 command_runner.go:130] > # NRI socket to listen on.
	I1007 13:06:01.890788  783699 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 13:06:01.890793  783699 command_runner.go:130] > # NRI plugin directory to use.
	I1007 13:06:01.890797  783699 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 13:06:01.890808  783699 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 13:06:01.890812  783699 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 13:06:01.890820  783699 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 13:06:01.890824  783699 command_runner.go:130] > # nri_disable_connections = false
	I1007 13:06:01.890831  783699 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 13:06:01.890835  783699 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 13:06:01.890843  783699 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 13:06:01.890847  783699 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 13:06:01.890853  783699 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 13:06:01.890859  783699 command_runner.go:130] > [crio.stats]
	I1007 13:06:01.890865  783699 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 13:06:01.890872  783699 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 13:06:01.890876  783699 command_runner.go:130] > # stats_collection_period = 0
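Editor's note, not part of the captured log: the commented defaults dumped above describe CRI-O's optional Prometheus metrics endpoint (metrics_port = 9090), plus its tracing, NRI, and stats settings. As a minimal sketch only, assuming metrics had been enabled in crio.conf and that the default port and the usual /metrics path and crio_* metric-name prefix apply, a Go probe of that endpoint could look like this:

	// metrics_probe.go: illustrative scrape of CRI-O's Prometheus endpoint,
	// assuming metrics are enabled and the default metrics_port = 9090 is used.
	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		// Address and /metrics path are assumptions based on the defaults above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()

		// Print only counters related to the collectors named in the
		// metrics_collectors list above (operations, image pulls).
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "crio_operations") || strings.HasPrefix(line, "crio_image_pulls") {
				fmt.Println(line)
			}
		}
	}
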
	I1007 13:06:01.890965  783699 cni.go:84] Creating CNI manager for ""
	I1007 13:06:01.890980  783699 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:06:01.891001  783699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:06:01.891025  783699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-723069 NodeName:multinode-723069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:06:01.891175  783699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-723069"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:06:01.891246  783699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:06:01.902940  783699 command_runner.go:130] > kubeadm
	I1007 13:06:01.902963  783699 command_runner.go:130] > kubectl
	I1007 13:06:01.902968  783699 command_runner.go:130] > kubelet
	I1007 13:06:01.902989  783699 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:06:01.903045  783699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:06:01.914351  783699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1007 13:06:01.933383  783699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:06:01.951954  783699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
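Editor's note, not part of the captured log: the kubeadm config dumped above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the scp step just logged writes to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration rather than minikube's own code, the sketch below splits such a file on its "---" separators and reports each document's apiVersion and kind:

	// kubeadm_docs.go: illustrative listing of the documents in the generated
	// multi-document kubeadm YAML; the path is the one shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		// kubeadm separates documents with a line containing only "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			var apiVersion, kind string
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "apiVersion:") {
					apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
				}
				if strings.HasPrefix(line, "kind:") {
					kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				}
			}
			fmt.Printf("document %d: %s / %s\n", i+1, apiVersion, kind)
		}
	}
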
	I1007 13:06:01.970659  783699 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I1007 13:06:01.975034  783699 command_runner.go:130] > 192.168.39.213	control-plane.minikube.internal
	I1007 13:06:01.975123  783699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:02.121651  783699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:06:02.137295  783699 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069 for IP: 192.168.39.213
	I1007 13:06:02.137322  783699 certs.go:194] generating shared ca certs ...
	I1007 13:06:02.137344  783699 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:02.137544  783699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:06:02.137591  783699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:06:02.137605  783699 certs.go:256] generating profile certs ...
	I1007 13:06:02.137756  783699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/client.key
	I1007 13:06:02.137847  783699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key.ae866860
	I1007 13:06:02.137905  783699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key
	I1007 13:06:02.137922  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 13:06:02.137944  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 13:06:02.137962  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 13:06:02.137980  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 13:06:02.137999  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 13:06:02.138019  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 13:06:02.138052  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 13:06:02.138071  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 13:06:02.138138  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:06:02.138182  783699 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:06:02.138195  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:06:02.138233  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:06:02.138265  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:06:02.138291  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:06:02.138345  783699 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:06:02.138380  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.138400  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.138422  783699 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem -> /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.139249  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:06:02.165662  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:06:02.191751  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:06:02.219139  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:06:02.244926  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 13:06:02.269854  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:06:02.296107  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:06:02.330607  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/multinode-723069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:06:02.355127  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:06:02.379833  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:06:02.404853  783699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:06:02.431617  783699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:06:02.449998  783699 ssh_runner.go:195] Run: openssl version
	I1007 13:06:02.456466  783699 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 13:06:02.456561  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:06:02.468804  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473481  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473526  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.473634  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:06:02.480836  783699 command_runner.go:130] > 51391683
	I1007 13:06:02.480987  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:06:02.491577  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:06:02.504102  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508838  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508879  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.508925  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:06:02.514965  783699 command_runner.go:130] > 3ec20f2e
	I1007 13:06:02.515039  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:06:02.525449  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:06:02.538044  783699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542787  783699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542825  783699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.542874  783699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:02.548589  783699 command_runner.go:130] > b5213941
	I1007 13:06:02.548681  783699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
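Editor's note, not part of the captured log: the certificate installation above follows the OpenSSL hashed-directory convention, linking each CA file into /etc/ssl/certs under its subject-hash name (<hash>.0, e.g. 51391683.0 and b5213941.0 above). A rough Go equivalent of those two shell steps, shelling out to the same openssl binary and using a path from the log, is sketched below; it is illustrative only and would need root to modify /etc/ssl/certs:

	// hash_link.go: illustrative version of the `openssl x509 -hash -noout`
	// plus `ln -fs <cert> /etc/ssl/certs/<hash>.0` steps logged above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 754324.pem above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -f: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/etc/ssl/certs/754324.pem"); err != nil {
			fmt.Println(err)
		}
	}
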
	I1007 13:06:02.558781  783699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:02.563432  783699 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:02.563468  783699 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 13:06:02.563478  783699 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I1007 13:06:02.563487  783699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 13:06:02.563496  783699 command_runner.go:130] > Access: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563503  783699 command_runner.go:130] > Modify: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563514  783699 command_runner.go:130] > Change: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563522  783699 command_runner.go:130] >  Birth: 2024-10-07 12:59:19.314251783 +0000
	I1007 13:06:02.563588  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:06:02.569597  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.569695  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:06:02.575468  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.575546  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:06:02.581231  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.581473  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:06:02.587188  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.587283  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:06:02.593152  783699 command_runner.go:130] > Certificate will not expire
	I1007 13:06:02.593241  783699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:06:02.599060  783699 command_runner.go:130] > Certificate will not expire
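Editor's note, not part of the captured log: each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which is why every check reports "Certificate will not expire". The same test can be expressed without shelling out; the sketch below is an illustrative standard-library equivalent applied to one of the paths from the log:

	// checkend.go: illustrative equivalent of `openssl x509 -noout -checkend 86400`
	// using crypto/x509 instead of the openssl binary.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log; 24h mirrors the -checkend 86400 argument.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
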
	I1007 13:06:02.599136  783699 kubeadm.go:392] StartCluster: {Name:multinode-723069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-723069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:02.599335  783699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:06:02.599399  783699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:06:02.635408  783699 command_runner.go:130] > 1b16ed582a3d903619a49d7a66ce8b1592c1282c8540cb0ad83c09b5f573e961
	I1007 13:06:02.635434  783699 command_runner.go:130] > a8f48750e5f44b1a9fe2de0f7ed356eb3f84295bd35ae219bd06360d48637746
	I1007 13:06:02.635440  783699 command_runner.go:130] > de662ec335094b1ce5b3decd4cbb85684fdb292c298d0b826103ec7b98d3c353
	I1007 13:06:02.635448  783699 command_runner.go:130] > e9481b7d15376c901357d20afc1775124be74711ae330834dc42c6d2af46217d
	I1007 13:06:02.635454  783699 command_runner.go:130] > 37eeaddff114d4a2699ef3faf7b619845f39755af2918424877c3367a13704c8
	I1007 13:06:02.635459  783699 command_runner.go:130] > 635171b305b41ba120abf85e1896029801be0ab931c67478482a8c57cee642f3
	I1007 13:06:02.635464  783699 command_runner.go:130] > 9f10303baf5bb3d0d6d815be9ccda86d5200b25e7bd0b377a537e845b4076093
	I1007 13:06:02.635472  783699 command_runner.go:130] > fc7b1afeb1b640dc162c7830189d8f6c1133dba31223b21ceb639ceabb3636e9
	I1007 13:06:02.636772  783699 cri.go:89] found id: "1b16ed582a3d903619a49d7a66ce8b1592c1282c8540cb0ad83c09b5f573e961"
	I1007 13:06:02.636789  783699 cri.go:89] found id: "a8f48750e5f44b1a9fe2de0f7ed356eb3f84295bd35ae219bd06360d48637746"
	I1007 13:06:02.636793  783699 cri.go:89] found id: "de662ec335094b1ce5b3decd4cbb85684fdb292c298d0b826103ec7b98d3c353"
	I1007 13:06:02.636796  783699 cri.go:89] found id: "e9481b7d15376c901357d20afc1775124be74711ae330834dc42c6d2af46217d"
	I1007 13:06:02.636800  783699 cri.go:89] found id: "37eeaddff114d4a2699ef3faf7b619845f39755af2918424877c3367a13704c8"
	I1007 13:06:02.636804  783699 cri.go:89] found id: "635171b305b41ba120abf85e1896029801be0ab931c67478482a8c57cee642f3"
	I1007 13:06:02.636807  783699 cri.go:89] found id: "9f10303baf5bb3d0d6d815be9ccda86d5200b25e7bd0b377a537e845b4076093"
	I1007 13:06:02.636809  783699 cri.go:89] found id: "fc7b1afeb1b640dc162c7830189d8f6c1133dba31223b21ceb639ceabb3636e9"
	I1007 13:06:02.636811  783699 cri.go:89] found id: ""
	I1007 13:06:02.636867  783699 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
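Editor's note, not part of the captured log: the tail of the log above shows how minikube enumerates kube-system containers before stopping them, running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and recording each returned ID before moving on to `runc list`. A rough, illustrative Go version of that listing step is sketched below; it assumes only that crictl is installed and reachable via sudo, and prints one ID per line like the output captured above:

	// list_kube_system.go: illustrative sketch of the container listing step
	// from the log above; not minikube's own implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// --quiet limits output to container IDs; the label filter selects the
		// kube-system namespace, exactly as in the logged command.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
		fmt.Printf("%d kube-system containers\n", len(ids))
	}
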
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-723069 -n multinode-723069
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-723069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.50s)

                                                
                                    
x
+
TestPreload (164.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-438901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1007 13:14:53.450346  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:15:13.698474  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-438901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.98884196s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-438901 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-438901 image pull gcr.io/k8s-minikube/busybox: (2.314413251s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-438901
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-438901: (7.311112576s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-438901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-438901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.185199866s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-438901 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-07 13:17:01.982628767 +0000 UTC m=+4155.181168749
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-438901 -n test-preload-438901
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-438901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-438901 logs -n 25: (1.131212934s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069 sudo cat                                       | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt                       | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m02:/home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n                                                                 | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | multinode-723069-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-723069 ssh -n multinode-723069-m02 sudo cat                                   | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-723069 node stop m03                                                          | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:01 UTC |
	| node    | multinode-723069 node start                                                             | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:01 UTC | 07 Oct 24 13:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| stop    | -p multinode-723069                                                                     | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	| start   | -p multinode-723069                                                                     | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:04 UTC | 07 Oct 24 13:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	| node    | multinode-723069 node delete                                                            | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC | 07 Oct 24 13:07 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-723069 stop                                                                   | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	| start   | -p multinode-723069                                                                     | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:10 UTC | 07 Oct 24 13:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-723069                                                                | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:13 UTC |                     |
	| start   | -p multinode-723069-m02                                                                 | multinode-723069-m02 | jenkins | v1.34.0 | 07 Oct 24 13:13 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-723069-m03                                                                 | multinode-723069-m03 | jenkins | v1.34.0 | 07 Oct 24 13:13 UTC | 07 Oct 24 13:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-723069                                                                 | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:14 UTC |                     |
	| delete  | -p multinode-723069-m03                                                                 | multinode-723069-m03 | jenkins | v1.34.0 | 07 Oct 24 13:14 UTC | 07 Oct 24 13:14 UTC |
	| delete  | -p multinode-723069                                                                     | multinode-723069     | jenkins | v1.34.0 | 07 Oct 24 13:14 UTC | 07 Oct 24 13:14 UTC |
	| start   | -p test-preload-438901                                                                  | test-preload-438901  | jenkins | v1.34.0 | 07 Oct 24 13:14 UTC | 07 Oct 24 13:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-438901 image pull                                                          | test-preload-438901  | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-438901                                                                  | test-preload-438901  | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:16 UTC |
	| start   | -p test-preload-438901                                                                  | test-preload-438901  | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-438901 image list                                                          | test-preload-438901  | jenkins | v1.34.0 | 07 Oct 24 13:17 UTC | 07 Oct 24 13:17 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:16:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:16:01.620185  788035 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:16:01.620411  788035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:16:01.620419  788035 out.go:358] Setting ErrFile to fd 2...
	I1007 13:16:01.620424  788035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:16:01.620591  788035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:16:01.621162  788035 out.go:352] Setting JSON to false
	I1007 13:16:01.622266  788035 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10711,"bootTime":1728296251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:16:01.622336  788035 start.go:139] virtualization: kvm guest
	I1007 13:16:01.625524  788035 out.go:177] * [test-preload-438901] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:16:01.626974  788035 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:16:01.627015  788035 notify.go:220] Checking for updates...
	I1007 13:16:01.629305  788035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:16:01.630856  788035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:16:01.632434  788035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:16:01.633967  788035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:16:01.635342  788035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:16:01.637304  788035 config.go:182] Loaded profile config "test-preload-438901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 13:16:01.638013  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:01.638093  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:01.653379  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I1007 13:16:01.653864  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:01.654523  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:01.654554  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:01.654934  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:01.655192  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:01.657279  788035 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 13:16:01.658536  788035 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:16:01.658868  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:01.658916  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:01.674222  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I1007 13:16:01.674746  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:01.675296  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:01.675320  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:01.675691  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:01.675903  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:01.712795  788035 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:16:01.714471  788035 start.go:297] selected driver: kvm2
	I1007 13:16:01.714494  788035 start.go:901] validating driver "kvm2" against &{Name:test-preload-438901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.24.4 ClusterName:test-preload-438901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:16:01.714636  788035 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:16:01.715473  788035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:16:01.715572  788035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:16:01.731412  788035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:16:01.731809  788035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:16:01.731849  788035 cni.go:84] Creating CNI manager for ""
	I1007 13:16:01.731910  788035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:16:01.731975  788035 start.go:340] cluster config:
	{Name:test-preload-438901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438901 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:16:01.732121  788035 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:16:01.734075  788035 out.go:177] * Starting "test-preload-438901" primary control-plane node in "test-preload-438901" cluster
	I1007 13:16:01.735284  788035 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 13:16:01.757623  788035 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1007 13:16:01.757653  788035 cache.go:56] Caching tarball of preloaded images
	I1007 13:16:01.757890  788035 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 13:16:01.759747  788035 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1007 13:16:01.760902  788035 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 13:16:01.784022  788035 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1007 13:16:04.944097  788035 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 13:16:04.944219  788035 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 13:16:05.818454  788035 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
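Editor's note, not part of the captured log: the preload download above carries its expected digest in the URL (?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9) and is verified on disk before use. The sketch below shows that verification idea in plain Go, outside minikube's own download package; the tarball path and md5 value are the ones visible in the log lines above:

	// verify_preload.go: illustrative md5 verification of the preload tarball,
	// mirroring the checksum check logged above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const (
			path     = "/home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
			expected = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"
		)
		f, err := os.Open(path)
		if err != nil {
			fmt.Println("open failed:", err)
			return
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println("hash failed:", err)
			return
		}
		sum := hex.EncodeToString(h.Sum(nil))
		if sum == expected {
			fmt.Println("preload checksum verified")
		} else {
			fmt.Printf("checksum mismatch: got %s want %s\n", sum, expected)
		}
	}
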
	I1007 13:16:05.818595  788035 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/config.json ...
	I1007 13:16:05.818841  788035 start.go:360] acquireMachinesLock for test-preload-438901: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:16:05.818912  788035 start.go:364] duration metric: took 47.231µs to acquireMachinesLock for "test-preload-438901"
	I1007 13:16:05.818928  788035 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:16:05.818933  788035 fix.go:54] fixHost starting: 
	I1007 13:16:05.819205  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:05.819244  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:05.834054  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44633
	I1007 13:16:05.834596  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:05.835055  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:05.835077  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:05.835415  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:05.835633  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:05.835801  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetState
	I1007 13:16:05.837445  788035 fix.go:112] recreateIfNeeded on test-preload-438901: state=Stopped err=<nil>
	I1007 13:16:05.837480  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	W1007 13:16:05.837663  788035 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:16:05.839869  788035 out.go:177] * Restarting existing kvm2 VM for "test-preload-438901" ...
	I1007 13:16:05.841213  788035 main.go:141] libmachine: (test-preload-438901) Calling .Start
	I1007 13:16:05.841391  788035 main.go:141] libmachine: (test-preload-438901) Ensuring networks are active...
	I1007 13:16:05.842192  788035 main.go:141] libmachine: (test-preload-438901) Ensuring network default is active
	I1007 13:16:05.842512  788035 main.go:141] libmachine: (test-preload-438901) Ensuring network mk-test-preload-438901 is active
	I1007 13:16:05.842843  788035 main.go:141] libmachine: (test-preload-438901) Getting domain xml...
	I1007 13:16:05.843554  788035 main.go:141] libmachine: (test-preload-438901) Creating domain...
	I1007 13:16:06.182249  788035 main.go:141] libmachine: (test-preload-438901) Waiting to get IP...
	I1007 13:16:06.183089  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:06.183521  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:06.183594  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:06.183514  788087 retry.go:31] will retry after 192.360479ms: waiting for machine to come up
	I1007 13:16:06.378121  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:06.378547  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:06.378580  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:06.378491  788087 retry.go:31] will retry after 287.930626ms: waiting for machine to come up
	I1007 13:16:06.668069  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:06.668528  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:06.668552  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:06.668486  788087 retry.go:31] will retry after 326.226234ms: waiting for machine to come up
	I1007 13:16:06.995988  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:06.996516  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:06.996539  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:06.996456  788087 retry.go:31] will retry after 393.723503ms: waiting for machine to come up
	I1007 13:16:07.392069  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:07.392588  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:07.392616  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:07.392554  788087 retry.go:31] will retry after 721.604188ms: waiting for machine to come up
	I1007 13:16:08.115486  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:08.116021  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:08.116050  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:08.115949  788087 retry.go:31] will retry after 793.14581ms: waiting for machine to come up
	I1007 13:16:08.910998  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:08.911403  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:08.911437  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:08.911347  788087 retry.go:31] will retry after 928.50162ms: waiting for machine to come up
	I1007 13:16:09.841160  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:09.841628  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:09.841655  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:09.841572  788087 retry.go:31] will retry after 919.315171ms: waiting for machine to come up
	I1007 13:16:10.762783  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:10.763179  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:10.763210  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:10.763119  788087 retry.go:31] will retry after 1.271170068s: waiting for machine to come up
	I1007 13:16:12.036024  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:12.036382  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:12.036405  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:12.036330  788087 retry.go:31] will retry after 1.897715194s: waiting for machine to come up
	I1007 13:16:13.936217  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:13.936636  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:13.936672  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:13.936574  788087 retry.go:31] will retry after 2.519866588s: waiting for machine to come up
	I1007 13:16:16.457793  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:16.458239  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:16.458270  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:16.458176  788087 retry.go:31] will retry after 2.45177616s: waiting for machine to come up
	I1007 13:16:18.912879  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:18.913478  788035 main.go:141] libmachine: (test-preload-438901) DBG | unable to find current IP address of domain test-preload-438901 in network mk-test-preload-438901
	I1007 13:16:18.913511  788035 main.go:141] libmachine: (test-preload-438901) DBG | I1007 13:16:18.913418  788087 retry.go:31] will retry after 4.112169596s: waiting for machine to come up
	I1007 13:16:23.028314  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.028938  788035 main.go:141] libmachine: (test-preload-438901) Found IP for machine: 192.168.39.238
	I1007 13:16:23.028960  788035 main.go:141] libmachine: (test-preload-438901) Reserving static IP address...
	I1007 13:16:23.029040  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has current primary IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.029402  788035 main.go:141] libmachine: (test-preload-438901) Reserved static IP address: 192.168.39.238
	I1007 13:16:23.029428  788035 main.go:141] libmachine: (test-preload-438901) Waiting for SSH to be available...
	I1007 13:16:23.029453  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "test-preload-438901", mac: "52:54:00:99:c9:a2", ip: "192.168.39.238"} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.029478  788035 main.go:141] libmachine: (test-preload-438901) DBG | skip adding static IP to network mk-test-preload-438901 - found existing host DHCP lease matching {name: "test-preload-438901", mac: "52:54:00:99:c9:a2", ip: "192.168.39.238"}
	I1007 13:16:23.029495  788035 main.go:141] libmachine: (test-preload-438901) DBG | Getting to WaitForSSH function...
	I1007 13:16:23.031906  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.032314  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.032348  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.032513  788035 main.go:141] libmachine: (test-preload-438901) DBG | Using SSH client type: external
	I1007 13:16:23.032530  788035 main.go:141] libmachine: (test-preload-438901) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa (-rw-------)
	I1007 13:16:23.032549  788035 main.go:141] libmachine: (test-preload-438901) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:16:23.032559  788035 main.go:141] libmachine: (test-preload-438901) DBG | About to run SSH command:
	I1007 13:16:23.032572  788035 main.go:141] libmachine: (test-preload-438901) DBG | exit 0
	I1007 13:16:23.154197  788035 main.go:141] libmachine: (test-preload-438901) DBG | SSH cmd err, output: <nil>: 
	I1007 13:16:23.154618  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetConfigRaw
	I1007 13:16:23.155318  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetIP
	I1007 13:16:23.158066  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.158434  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.158467  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.158694  788035 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/config.json ...
	I1007 13:16:23.158953  788035 machine.go:93] provisionDockerMachine start ...
	I1007 13:16:23.158974  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:23.159205  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.161658  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.162005  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.162050  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.162210  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:23.162403  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.162557  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.162753  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:23.162906  788035 main.go:141] libmachine: Using SSH client type: native
	I1007 13:16:23.163128  788035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1007 13:16:23.163140  788035 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:16:23.267044  788035 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:16:23.267078  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetMachineName
	I1007 13:16:23.267326  788035 buildroot.go:166] provisioning hostname "test-preload-438901"
	I1007 13:16:23.267364  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetMachineName
	I1007 13:16:23.267582  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.270524  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.271031  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.271066  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.271237  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:23.271498  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.271694  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.271884  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:23.272099  788035 main.go:141] libmachine: Using SSH client type: native
	I1007 13:16:23.272281  788035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1007 13:16:23.272293  788035 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-438901 && echo "test-preload-438901" | sudo tee /etc/hostname
	I1007 13:16:23.389516  788035 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-438901
	
	I1007 13:16:23.389551  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.392639  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.393081  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.393120  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.393332  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:23.393526  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.393655  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.393772  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:23.393892  788035 main.go:141] libmachine: Using SSH client type: native
	I1007 13:16:23.394089  788035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1007 13:16:23.394107  788035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-438901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-438901/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-438901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:16:23.503638  788035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:16:23.503675  788035 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:16:23.503725  788035 buildroot.go:174] setting up certificates
	I1007 13:16:23.503737  788035 provision.go:84] configureAuth start
	I1007 13:16:23.503747  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetMachineName
	I1007 13:16:23.504055  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetIP
	I1007 13:16:23.506642  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.507006  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.507037  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.507200  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.509409  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.509715  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.509742  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.509903  788035 provision.go:143] copyHostCerts
	I1007 13:16:23.509985  788035 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:16:23.510020  788035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:16:23.510121  788035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:16:23.510263  788035 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:16:23.510277  788035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:16:23.510315  788035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:16:23.510395  788035 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:16:23.510406  788035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:16:23.510437  788035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:16:23.510504  788035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.test-preload-438901 san=[127.0.0.1 192.168.39.238 localhost minikube test-preload-438901]
	I1007 13:16:23.701920  788035 provision.go:177] copyRemoteCerts
	I1007 13:16:23.701980  788035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:16:23.702013  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.705073  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.705455  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.705494  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.705687  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:23.705918  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.706083  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:23.706225  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:23.790498  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:16:23.816223  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 13:16:23.841160  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:16:23.866499  788035 provision.go:87] duration metric: took 362.747381ms to configureAuth
	I1007 13:16:23.866532  788035 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:16:23.866703  788035 config.go:182] Loaded profile config "test-preload-438901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 13:16:23.866841  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:23.869683  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.870114  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:23.870145  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:23.870322  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:23.870498  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.870684  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:23.870821  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:23.871007  788035 main.go:141] libmachine: Using SSH client type: native
	I1007 13:16:23.871174  788035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1007 13:16:23.871187  788035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:16:24.102142  788035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:16:24.102177  788035 machine.go:96] duration metric: took 943.208456ms to provisionDockerMachine
	I1007 13:16:24.102194  788035 start.go:293] postStartSetup for "test-preload-438901" (driver="kvm2")
	I1007 13:16:24.102208  788035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:16:24.102243  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:24.102608  788035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:16:24.102648  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:24.105869  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.106299  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:24.106332  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.106489  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:24.106723  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:24.106955  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:24.107170  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:24.189706  788035 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:16:24.193935  788035 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:16:24.193959  788035 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:16:24.194058  788035 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:16:24.194130  788035 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:16:24.194225  788035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:16:24.203988  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:16:24.228529  788035 start.go:296] duration metric: took 126.316158ms for postStartSetup
	I1007 13:16:24.228580  788035 fix.go:56] duration metric: took 18.409646202s for fixHost
	I1007 13:16:24.228603  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:24.231220  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.231598  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:24.231630  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.231785  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:24.232006  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:24.232152  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:24.232296  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:24.232434  788035 main.go:141] libmachine: Using SSH client type: native
	I1007 13:16:24.232607  788035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1007 13:16:24.232617  788035 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:16:24.335144  788035 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306984.291231881
	
	I1007 13:16:24.335175  788035 fix.go:216] guest clock: 1728306984.291231881
	I1007 13:16:24.335186  788035 fix.go:229] Guest: 2024-10-07 13:16:24.291231881 +0000 UTC Remote: 2024-10-07 13:16:24.22858419 +0000 UTC m=+22.648920925 (delta=62.647691ms)
	I1007 13:16:24.335240  788035 fix.go:200] guest clock delta is within tolerance: 62.647691ms
	I1007 13:16:24.335246  788035 start.go:83] releasing machines lock for "test-preload-438901", held for 18.516324554s
	I1007 13:16:24.335268  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:24.335579  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetIP
	I1007 13:16:24.338632  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.338920  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:24.338947  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.339135  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:24.339714  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:24.339903  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:24.340060  788035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:16:24.340104  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:24.340187  788035 ssh_runner.go:195] Run: cat /version.json
	I1007 13:16:24.340216  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:24.343086  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.343200  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.343453  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:24.343477  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.343541  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:24.343564  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:24.343612  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:24.343817  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:24.343896  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:24.343971  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:24.344102  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:24.344108  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:24.344281  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:24.344288  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:24.441827  788035 ssh_runner.go:195] Run: systemctl --version
	I1007 13:16:24.448012  788035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:16:24.589564  788035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:16:24.597106  788035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:16:24.597196  788035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:16:24.615148  788035 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:16:24.615182  788035 start.go:495] detecting cgroup driver to use...
	I1007 13:16:24.615267  788035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:16:24.632743  788035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:16:24.647609  788035 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:16:24.647685  788035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:16:24.662598  788035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:16:24.677120  788035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:16:24.791493  788035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:16:24.964748  788035 docker.go:233] disabling docker service ...
	I1007 13:16:24.964831  788035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:16:24.979917  788035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:16:24.994625  788035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:16:25.113435  788035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:16:25.236260  788035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:16:25.252830  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:16:25.273965  788035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1007 13:16:25.274061  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.286090  788035 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:16:25.286162  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.298230  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.310177  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.321968  788035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:16:25.334239  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.346014  788035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.365126  788035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:16:25.377060  788035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:16:25.388276  788035 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:16:25.388347  788035 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:16:25.403902  788035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:16:25.416208  788035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:16:25.536968  788035 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:16:25.634538  788035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:16:25.634624  788035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:16:25.640057  788035 start.go:563] Will wait 60s for crictl version
	I1007 13:16:25.640121  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:25.644114  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:16:25.686560  788035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:16:25.686650  788035 ssh_runner.go:195] Run: crio --version
	I1007 13:16:25.717029  788035 ssh_runner.go:195] Run: crio --version
	I1007 13:16:25.751675  788035 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1007 13:16:25.753166  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetIP
	I1007 13:16:25.755872  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:25.756212  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:25.756236  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:25.756527  788035 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:16:25.761220  788035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:16:25.775675  788035 kubeadm.go:883] updating cluster {Name:test-preload-438901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:16:25.775786  788035 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 13:16:25.775840  788035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:16:25.813698  788035 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1007 13:16:25.813780  788035 ssh_runner.go:195] Run: which lz4
	I1007 13:16:25.818079  788035 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:16:25.822696  788035 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:16:25.822741  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1007 13:16:27.473740  788035 crio.go:462] duration metric: took 1.655708423s to copy over tarball
	I1007 13:16:27.473815  788035 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:16:29.944207  788035 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.470357414s)
	I1007 13:16:29.944244  788035 crio.go:469] duration metric: took 2.470471881s to extract the tarball
	I1007 13:16:29.944254  788035 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:16:29.985617  788035 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:16:30.031903  788035 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1007 13:16:30.031930  788035 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:16:30.031995  788035 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:16:30.032026  788035 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.032047  788035 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.032087  788035 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.032104  788035 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.032120  788035 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 13:16:30.032148  788035 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.032101  788035 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.033688  788035 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.033699  788035 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.033699  788035 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.033692  788035 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.033694  788035 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.033694  788035 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.033741  788035 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:16:30.033688  788035 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 13:16:30.192967  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.198249  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.205274  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.250105  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.278749  788035 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1007 13:16:30.278809  788035 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.278749  788035 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1007 13:16:30.278868  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.278904  788035 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.278973  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.297662  788035 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1007 13:16:30.297714  788035 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.297760  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.312435  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.313774  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.318006  788035 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1007 13:16:30.318065  788035 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.318121  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.318127  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.318149  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.318221  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.338676  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 13:16:30.433976  788035 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1007 13:16:30.434035  788035 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.434268  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.441194  788035 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1007 13:16:30.441241  788035 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.441293  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.459512  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.459625  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.459660  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.459724  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.484049  788035 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1007 13:16:30.484119  788035 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1007 13:16:30.484136  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.484163  788035 ssh_runner.go:195] Run: which crictl
	I1007 13:16:30.484202  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.605565  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.605630  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 13:16:30.605709  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 13:16:30.605775  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 13:16:30.634352  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.635642  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 13:16:30.635778  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.752164  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 13:16:30.752244  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1007 13:16:30.752286  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 13:16:30.752334  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1007 13:16:30.752342  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 13:16:30.752352  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 13:16:30.752414  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 13:16:30.805533  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 13:16:30.805565  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 13:16:30.805638  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 13:16:30.805656  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1007 13:16:30.805666  788035 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 13:16:30.805708  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 13:16:30.805756  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1007 13:16:30.866249  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1007 13:16:30.866313  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1007 13:16:30.866420  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 13:16:30.905004  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1007 13:16:30.905128  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 13:16:30.908919  788035 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:16:34.653993  788035 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.848252958s)
	I1007 13:16:34.654075  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1007 13:16:34.654069  788035 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.848472831s)
	I1007 13:16:34.654114  788035 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 13:16:34.654129  788035 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.848468071s)
	I1007 13:16:34.654139  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1007 13:16:34.654186  788035 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 13:16:34.654240  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 13:16:34.654255  788035 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.749111409s)
	I1007 13:16:34.654186  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 13:16:34.654280  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1007 13:16:34.654195  788035 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.787756368s)
	I1007 13:16:34.654316  788035 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.745364516s)
	I1007 13:16:34.654324  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1007 13:16:34.703472  788035 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1007 13:16:34.703589  788035 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 13:16:35.114449  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1007 13:16:35.114491  788035 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 13:16:35.114546  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1007 13:16:35.114563  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1007 13:16:35.114624  788035 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1007 13:16:35.565465  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 13:16:35.565526  788035 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 13:16:35.565583  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 13:16:36.309283  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1007 13:16:36.309346  788035 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 13:16:36.309435  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 13:16:37.156862  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1007 13:16:37.156923  788035 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 13:16:37.156997  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1007 13:16:39.312125  788035 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.15509904s)
	I1007 13:16:39.312155  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 13:16:39.312184  788035 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 13:16:39.312248  788035 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1007 13:16:39.462110  788035 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1007 13:16:39.462161  788035 cache_images.go:123] Successfully loaded all cached images
	I1007 13:16:39.462166  788035 cache_images.go:92] duration metric: took 9.430225243s to LoadCachedImages
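	The image-load step above transfers cached tarballs from the host cache to the node and runs `sudo podman load -i` on each one before recording "Transferred and loaded ... from cache". A minimal Go sketch of that shell-out pattern; the tarball path is illustrative, and it assumes podman and passwordless sudo are available on the target:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// loadImageTarball shells out to podman to load an image archive,
	// mirroring the "sudo podman load -i <tarball>" calls in the log above.
	func loadImageTarball(path string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", path, err, out)
		}
		fmt.Printf("loaded %s\n%s", path, out)
		return nil
	}

	func main() {
		// Illustrative path; this run used files under /var/lib/minikube/images/.
		if err := loadImageTarball("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
			log.Fatal(err)
		}
	}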
	I1007 13:16:39.462180  788035 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.24.4 crio true true} ...
	I1007 13:16:39.462285  788035 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-438901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-438901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:16:39.462364  788035 ssh_runner.go:195] Run: crio config
	I1007 13:16:39.516814  788035 cni.go:84] Creating CNI manager for ""
	I1007 13:16:39.516838  788035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:16:39.516849  788035 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:16:39.516867  788035 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-438901 NodeName:test-preload-438901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:16:39.517000  788035 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-438901"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
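	The kubeadm config dumped above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that walks such a multi-document file and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 is available and using a hypothetical local filename:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Hypothetical local copy of the generated config; on the node it is
		// staged at /var/tmp/minikube/kubeadm.yaml.new before being copied into place.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				log.Fatal(err)
			}
			// Print one line per YAML document, e.g. "kubeadm.k8s.io/v1beta3/ClusterConfiguration".
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}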
	I1007 13:16:39.517072  788035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1007 13:16:39.528340  788035 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:16:39.528426  788035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:16:39.539918  788035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1007 13:16:39.558643  788035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:16:39.576865  788035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1007 13:16:39.596257  788035 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I1007 13:16:39.600543  788035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
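	The one-liner above makes the /etc/hosts update idempotent: it drops any existing line for control-plane.minikube.internal, appends a fresh entry pointing at 192.168.39.238, and copies the result back. The same idea in Go, run against a hypothetical local copy of the file (writing the real /etc/hosts requires root):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
	// maps hostname to ip, mirroring the grep/echo/cp one-liner in the log above.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // drop any stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// "hosts.test" is a placeholder; point it at a scratch copy of /etc/hosts.
		if err := ensureHostsEntry("hosts.test", "192.168.39.238", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hosts entry ensured")
	}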
	I1007 13:16:39.614580  788035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:16:39.725928  788035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:16:39.744510  788035 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901 for IP: 192.168.39.238
	I1007 13:16:39.744538  788035 certs.go:194] generating shared ca certs ...
	I1007 13:16:39.744555  788035 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:16:39.744706  788035 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:16:39.744766  788035 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:16:39.744778  788035 certs.go:256] generating profile certs ...
	I1007 13:16:39.744866  788035 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/client.key
	I1007 13:16:39.744924  788035 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/apiserver.key.212ac66b
	I1007 13:16:39.744970  788035 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/proxy-client.key
	I1007 13:16:39.745082  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:16:39.745113  788035 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:16:39.745123  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:16:39.745143  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:16:39.745163  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:16:39.745183  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:16:39.745219  788035 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:16:39.745904  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:16:39.783513  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:16:39.818499  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:16:39.853265  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:16:39.893351  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1007 13:16:39.927494  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:16:39.963120  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:16:39.990247  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:16:40.017368  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:16:40.043445  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:16:40.069262  788035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:16:40.095113  788035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:16:40.113721  788035 ssh_runner.go:195] Run: openssl version
	I1007 13:16:40.119819  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:16:40.132466  788035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:16:40.137630  788035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:16:40.137703  788035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:16:40.144003  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:16:40.156146  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:16:40.168683  788035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:16:40.173787  788035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:16:40.173857  788035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:16:40.180126  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:16:40.192126  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:16:40.204269  788035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:16:40.209760  788035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:16:40.209833  788035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:16:40.216361  788035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:16:40.228939  788035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:16:40.233953  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:16:40.240036  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:16:40.246245  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:16:40.252575  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:16:40.258579  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:16:40.264820  788035 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
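	Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. An equivalent check in Go with crypto/x509, using a hypothetical certificate path (this run inspects files under /var/lib/minikube/certs/):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file expires
	// within the given window, roughly what "openssl x509 -checkend" tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; substitute any of the certs checked above.
		soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}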
	I1007 13:16:40.270812  788035 kubeadm.go:392] StartCluster: {Name:test-preload-438901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:16:40.270935  788035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:16:40.270995  788035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:16:40.313999  788035 cri.go:89] found id: ""
	I1007 13:16:40.314109  788035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:16:40.326863  788035 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:16:40.326889  788035 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:16:40.326934  788035 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:16:40.339326  788035 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:16:40.339808  788035 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-438901" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:16:40.339924  788035 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-747025/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-438901" cluster setting kubeconfig missing "test-preload-438901" context setting]
	I1007 13:16:40.340211  788035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:16:40.340861  788035 kapi.go:59] client config for test-preload-438901: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 13:16:40.341536  788035 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:16:40.353941  788035 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I1007 13:16:40.353989  788035 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:16:40.354002  788035 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:16:40.354114  788035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:16:40.398403  788035 cri.go:89] found id: ""
	I1007 13:16:40.398499  788035 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:16:40.416193  788035 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:16:40.427566  788035 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:16:40.427591  788035 kubeadm.go:157] found existing configuration files:
	
	I1007 13:16:40.427641  788035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:16:40.438009  788035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:16:40.438104  788035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:16:40.449059  788035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:16:40.459975  788035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:16:40.460055  788035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:16:40.470987  788035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:16:40.481373  788035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:16:40.481437  788035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:16:40.492137  788035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:16:40.504336  788035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:16:40.504474  788035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:16:40.516789  788035 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:16:40.528026  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:40.632145  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:41.736829  788035 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.104641033s)
	I1007 13:16:41.736885  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:42.005690  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:42.089238  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:42.162823  788035 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:16:42.162922  788035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:16:42.663959  788035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:16:43.163162  788035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:16:43.182375  788035 api_server.go:72] duration metric: took 1.019563377s to wait for apiserver process to appear ...
	I1007 13:16:43.182403  788035 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:16:43.182423  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:43.183004  788035 api_server.go:269] stopped: https://192.168.39.238:8443/healthz: Get "https://192.168.39.238:8443/healthz": dial tcp 192.168.39.238:8443: connect: connection refused
	I1007 13:16:43.683208  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:47.602215  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:16:47.602260  788035 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:16:47.602275  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:47.620330  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:16:47.620369  788035 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:16:47.683552  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:47.695362  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:16:47.695398  788035 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:16:48.182931  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:48.191961  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:16:48.192005  788035 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:16:48.683265  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:48.695521  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:16:48.695559  788035 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:16:49.183179  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:16:49.189032  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I1007 13:16:49.197706  788035 api_server.go:141] control plane version: v1.24.4
	I1007 13:16:49.197749  788035 api_server.go:131] duration metric: took 6.015339622s to wait for apiserver health ...
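	The healthz polling above retries https://192.168.39.238:8443/healthz until it returns 200: the early 403s are the anonymous probe being rejected, and the 500s show post-start hooks (rbac bootstrap roles, system priority classes) that have not finished yet. A rough Go sketch of such a poll loop; the endpoint is taken from this run, and TLS verification is skipped only to keep the example short (the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.39.238:8443/healthz"

		// InsecureSkipVerify keeps the sketch self-contained; do not do this in production.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err != nil {
				log.Printf("healthz not reachable yet: %v", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				log.Printf("healthz returned %d: %s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}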
	I1007 13:16:49.197760  788035 cni.go:84] Creating CNI manager for ""
	I1007 13:16:49.197767  788035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:16:49.199910  788035 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:16:49.201672  788035 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:16:49.221522  788035 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:16:49.246343  788035 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:16:49.246436  788035 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 13:16:49.246452  788035 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 13:16:49.256498  788035 system_pods.go:59] 8 kube-system pods found
	I1007 13:16:49.256538  788035 system_pods.go:61] "coredns-6d4b75cb6d-lmc97" [e7a37ec8-8247-47b9-a89c-c31758f8941e] Running
	I1007 13:16:49.256546  788035 system_pods.go:61] "coredns-6d4b75cb6d-wpmdb" [b22c80db-6231-4394-b156-1957540d916b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:16:49.256552  788035 system_pods.go:61] "etcd-test-preload-438901" [b711ddca-a452-48c9-bc40-03766d637ad5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:16:49.256558  788035 system_pods.go:61] "kube-apiserver-test-preload-438901" [254cb8fc-94cb-4380-9ddf-b2d406bcf9fe] Running
	I1007 13:16:49.256564  788035 system_pods.go:61] "kube-controller-manager-test-preload-438901" [2bb988d7-9b6c-4dec-adf5-85ac50e40b77] Running
	I1007 13:16:49.256570  788035 system_pods.go:61] "kube-proxy-flkxg" [459352ca-053f-4f04-8b0b-7f9595171594] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 13:16:49.256575  788035 system_pods.go:61] "kube-scheduler-test-preload-438901" [f3c1f95e-c2e8-4702-a07b-8ba1b8d8b09d] Running
	I1007 13:16:49.256581  788035 system_pods.go:61] "storage-provisioner" [19e4864a-dceb-4582-9640-59e53fb889a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1007 13:16:49.256590  788035 system_pods.go:74] duration metric: took 10.220516ms to wait for pod list to return data ...
	I1007 13:16:49.256603  788035 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:16:49.261743  788035 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:16:49.261797  788035 node_conditions.go:123] node cpu capacity is 2
	I1007 13:16:49.261813  788035 node_conditions.go:105] duration metric: took 5.20373ms to run NodePressure ...
	I1007 13:16:49.261852  788035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:16:49.538872  788035 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 13:16:49.545414  788035 kubeadm.go:739] kubelet initialised
	I1007 13:16:49.545438  788035 kubeadm.go:740] duration metric: took 6.541299ms waiting for restarted kubelet to initialise ...
	I1007 13:16:49.545448  788035 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:16:49.551976  788035 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-lmc97" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:49.559662  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "coredns-6d4b75cb6d-lmc97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.559687  788035 pod_ready.go:82] duration metric: took 7.674216ms for pod "coredns-6d4b75cb6d-lmc97" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:49.559696  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "coredns-6d4b75cb6d-lmc97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.559702  788035 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:49.572934  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.572969  788035 pod_ready.go:82] duration metric: took 13.258388ms for pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:49.572982  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.572990  788035 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:49.585076  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "etcd-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.585114  788035 pod_ready.go:82] duration metric: took 12.112551ms for pod "etcd-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:49.585128  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "etcd-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.585137  788035 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:49.651193  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "kube-apiserver-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.651222  788035 pod_ready.go:82] duration metric: took 66.073978ms for pod "kube-apiserver-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:49.651233  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "kube-apiserver-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:49.651241  788035 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:50.050008  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.050050  788035 pod_ready.go:82] duration metric: took 398.798262ms for pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:50.050064  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.050075  788035 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-flkxg" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:50.451927  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "kube-proxy-flkxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.451970  788035 pod_ready.go:82] duration metric: took 401.883881ms for pod "kube-proxy-flkxg" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:50.451982  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "kube-proxy-flkxg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.451989  788035 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:50.850197  788035 pod_ready.go:98] node "test-preload-438901" hosting pod "kube-scheduler-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.850225  788035 pod_ready.go:82] duration metric: took 398.229102ms for pod "kube-scheduler-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	E1007 13:16:50.850235  788035 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-438901" hosting pod "kube-scheduler-test-preload-438901" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:50.850243  788035 pod_ready.go:39] duration metric: took 1.30478351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
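	The pod_ready helpers above wait on each system-critical pod's Ready condition, but skip the wait while the node itself still reports Ready=False. A client-go sketch that reads one pod and checks that same condition; the kubeconfig path and pod name are placeholders, and it assumes the k8s.io/client-go modules are on the module path:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the same
	// condition the pod_ready waits in the log are checking.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path and pod name for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-test-preload-438901", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}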
	I1007 13:16:50.850270  788035 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:16:50.862737  788035 ops.go:34] apiserver oom_adj: -16
	I1007 13:16:50.862765  788035 kubeadm.go:597] duration metric: took 10.535871059s to restartPrimaryControlPlane
	I1007 13:16:50.862776  788035 kubeadm.go:394] duration metric: took 10.591976966s to StartCluster
	I1007 13:16:50.862794  788035 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:16:50.862866  788035 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:16:50.863523  788035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:16:50.863764  788035 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:16:50.863831  788035 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:16:50.863937  788035 addons.go:69] Setting storage-provisioner=true in profile "test-preload-438901"
	I1007 13:16:50.863956  788035 addons.go:234] Setting addon storage-provisioner=true in "test-preload-438901"
	W1007 13:16:50.863965  788035 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:16:50.863963  788035 addons.go:69] Setting default-storageclass=true in profile "test-preload-438901"
	I1007 13:16:50.863998  788035 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-438901"
	I1007 13:16:50.864034  788035 config.go:182] Loaded profile config "test-preload-438901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 13:16:50.864000  788035 host.go:66] Checking if "test-preload-438901" exists ...
	I1007 13:16:50.864519  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:50.864519  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:50.864564  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:50.864570  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:50.865486  788035 out.go:177] * Verifying Kubernetes components...
	I1007 13:16:50.866887  788035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:16:50.880294  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I1007 13:16:50.880343  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I1007 13:16:50.880797  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:50.880936  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:50.881320  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:50.881341  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:50.881436  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:50.881458  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:50.881698  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:50.881789  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:50.881949  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetState
	I1007 13:16:50.882248  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:50.882285  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:50.884328  788035 kapi.go:59] client config for test-preload-438901: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901/client.key", CAFile:"/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 13:16:50.884672  788035 addons.go:234] Setting addon default-storageclass=true in "test-preload-438901"
	W1007 13:16:50.884688  788035 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:16:50.884718  788035 host.go:66] Checking if "test-preload-438901" exists ...
	I1007 13:16:50.885086  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:50.885144  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:50.897786  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I1007 13:16:50.898397  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:50.898964  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:50.899006  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:50.899374  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:50.899579  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetState
	I1007 13:16:50.900234  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I1007 13:16:50.900756  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:50.901300  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:50.901326  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:50.901350  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:50.901654  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:50.902207  788035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:16:50.902256  788035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:16:50.903434  788035 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:16:50.904842  788035 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:16:50.904866  788035 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:16:50.904884  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:50.907762  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:50.908244  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:50.908274  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:50.908417  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:50.908617  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:50.908782  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:50.908937  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:50.946522  788035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1007 13:16:50.947141  788035 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:16:50.947699  788035 main.go:141] libmachine: Using API Version  1
	I1007 13:16:50.947725  788035 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:16:50.948144  788035 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:16:50.948360  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetState
	I1007 13:16:50.949968  788035 main.go:141] libmachine: (test-preload-438901) Calling .DriverName
	I1007 13:16:50.950238  788035 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:16:50.950254  788035 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:16:50.950274  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHHostname
	I1007 13:16:50.953165  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:50.953596  788035 main.go:141] libmachine: (test-preload-438901) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:c9:a2", ip: ""} in network mk-test-preload-438901: {Iface:virbr1 ExpiryTime:2024-10-07 14:16:16 +0000 UTC Type:0 Mac:52:54:00:99:c9:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-438901 Clientid:01:52:54:00:99:c9:a2}
	I1007 13:16:50.953627  788035 main.go:141] libmachine: (test-preload-438901) DBG | domain test-preload-438901 has defined IP address 192.168.39.238 and MAC address 52:54:00:99:c9:a2 in network mk-test-preload-438901
	I1007 13:16:50.953767  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHPort
	I1007 13:16:50.953973  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHKeyPath
	I1007 13:16:50.954169  788035 main.go:141] libmachine: (test-preload-438901) Calling .GetSSHUsername
	I1007 13:16:50.954390  788035 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/test-preload-438901/id_rsa Username:docker}
	I1007 13:16:51.044001  788035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:16:51.063360  788035 node_ready.go:35] waiting up to 6m0s for node "test-preload-438901" to be "Ready" ...
	I1007 13:16:51.153218  788035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:16:51.232552  788035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:16:52.186843  788035 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.033588985s)
	I1007 13:16:52.186902  788035 main.go:141] libmachine: Making call to close driver server
	I1007 13:16:52.186913  788035 main.go:141] libmachine: (test-preload-438901) Calling .Close
	I1007 13:16:52.187220  788035 main.go:141] libmachine: (test-preload-438901) DBG | Closing plugin on server side
	I1007 13:16:52.187245  788035 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:16:52.187258  788035 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:16:52.187275  788035 main.go:141] libmachine: Making call to close driver server
	I1007 13:16:52.187287  788035 main.go:141] libmachine: (test-preload-438901) Calling .Close
	I1007 13:16:52.187581  788035 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:16:52.187595  788035 main.go:141] libmachine: (test-preload-438901) DBG | Closing plugin on server side
	I1007 13:16:52.187605  788035 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:16:52.195909  788035 main.go:141] libmachine: Making call to close driver server
	I1007 13:16:52.195939  788035 main.go:141] libmachine: (test-preload-438901) Calling .Close
	I1007 13:16:52.196226  788035 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:16:52.196246  788035 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:16:52.238351  788035 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.005749696s)
	I1007 13:16:52.238412  788035 main.go:141] libmachine: Making call to close driver server
	I1007 13:16:52.238426  788035 main.go:141] libmachine: (test-preload-438901) Calling .Close
	I1007 13:16:52.238763  788035 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:16:52.238784  788035 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:16:52.238788  788035 main.go:141] libmachine: (test-preload-438901) DBG | Closing plugin on server side
	I1007 13:16:52.238793  788035 main.go:141] libmachine: Making call to close driver server
	I1007 13:16:52.238809  788035 main.go:141] libmachine: (test-preload-438901) Calling .Close
	I1007 13:16:52.239012  788035 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:16:52.239025  788035 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:16:52.241200  788035 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1007 13:16:52.242565  788035 addons.go:510] duration metric: took 1.378721577s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1007 13:16:53.067591  788035 node_ready.go:53] node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:55.068140  788035 node_ready.go:53] node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:57.567304  788035 node_ready.go:53] node "test-preload-438901" has status "Ready":"False"
	I1007 13:16:58.067468  788035 node_ready.go:49] node "test-preload-438901" has status "Ready":"True"
	I1007 13:16:58.067499  788035 node_ready.go:38] duration metric: took 7.004095977s for node "test-preload-438901" to be "Ready" ...
	I1007 13:16:58.067513  788035 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:16:58.072742  788035 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:58.078612  788035 pod_ready.go:93] pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:16:58.078663  788035 pod_ready.go:82] duration metric: took 5.885412ms for pod "coredns-6d4b75cb6d-wpmdb" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:58.078674  788035 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:58.084781  788035 pod_ready.go:93] pod "etcd-test-preload-438901" in "kube-system" namespace has status "Ready":"True"
	I1007 13:16:58.084811  788035 pod_ready.go:82] duration metric: took 6.129789ms for pod "etcd-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:16:58.084821  788035 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.090555  788035 pod_ready.go:103] pod "kube-apiserver-test-preload-438901" in "kube-system" namespace has status "Ready":"False"
	I1007 13:17:00.591667  788035 pod_ready.go:93] pod "kube-apiserver-test-preload-438901" in "kube-system" namespace has status "Ready":"True"
	I1007 13:17:00.591698  788035 pod_ready.go:82] duration metric: took 2.506868359s for pod "kube-apiserver-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.591713  788035 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.597004  788035 pod_ready.go:93] pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace has status "Ready":"True"
	I1007 13:17:00.597029  788035 pod_ready.go:82] duration metric: took 5.308065ms for pod "kube-controller-manager-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.597039  788035 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-flkxg" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.605653  788035 pod_ready.go:93] pod "kube-proxy-flkxg" in "kube-system" namespace has status "Ready":"True"
	I1007 13:17:00.605679  788035 pod_ready.go:82] duration metric: took 8.635026ms for pod "kube-proxy-flkxg" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.605689  788035 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.868572  788035 pod_ready.go:93] pod "kube-scheduler-test-preload-438901" in "kube-system" namespace has status "Ready":"True"
	I1007 13:17:00.868599  788035 pod_ready.go:82] duration metric: took 262.902931ms for pod "kube-scheduler-test-preload-438901" in "kube-system" namespace to be "Ready" ...
	I1007 13:17:00.868610  788035 pod_ready.go:39] duration metric: took 2.80108411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:17:00.868627  788035 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:17:00.868692  788035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:17:00.884984  788035 api_server.go:72] duration metric: took 10.021187361s to wait for apiserver process to appear ...
	I1007 13:17:00.885015  788035 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:17:00.885035  788035 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1007 13:17:00.890368  788035 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I1007 13:17:00.891433  788035 api_server.go:141] control plane version: v1.24.4
	I1007 13:17:00.891456  788035 api_server.go:131] duration metric: took 6.434058ms to wait for apiserver health ...
	I1007 13:17:00.891465  788035 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:17:01.069037  788035 system_pods.go:59] 7 kube-system pods found
	I1007 13:17:01.069073  788035 system_pods.go:61] "coredns-6d4b75cb6d-wpmdb" [b22c80db-6231-4394-b156-1957540d916b] Running
	I1007 13:17:01.069078  788035 system_pods.go:61] "etcd-test-preload-438901" [b711ddca-a452-48c9-bc40-03766d637ad5] Running
	I1007 13:17:01.069082  788035 system_pods.go:61] "kube-apiserver-test-preload-438901" [254cb8fc-94cb-4380-9ddf-b2d406bcf9fe] Running
	I1007 13:17:01.069087  788035 system_pods.go:61] "kube-controller-manager-test-preload-438901" [2bb988d7-9b6c-4dec-adf5-85ac50e40b77] Running
	I1007 13:17:01.069090  788035 system_pods.go:61] "kube-proxy-flkxg" [459352ca-053f-4f04-8b0b-7f9595171594] Running
	I1007 13:17:01.069093  788035 system_pods.go:61] "kube-scheduler-test-preload-438901" [f3c1f95e-c2e8-4702-a07b-8ba1b8d8b09d] Running
	I1007 13:17:01.069096  788035 system_pods.go:61] "storage-provisioner" [19e4864a-dceb-4582-9640-59e53fb889a9] Running
	I1007 13:17:01.069102  788035 system_pods.go:74] duration metric: took 177.631729ms to wait for pod list to return data ...
	I1007 13:17:01.069110  788035 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:17:01.268132  788035 default_sa.go:45] found service account: "default"
	I1007 13:17:01.268167  788035 default_sa.go:55] duration metric: took 199.0433ms for default service account to be created ...
	I1007 13:17:01.268176  788035 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:17:01.472778  788035 system_pods.go:86] 7 kube-system pods found
	I1007 13:17:01.472809  788035 system_pods.go:89] "coredns-6d4b75cb6d-wpmdb" [b22c80db-6231-4394-b156-1957540d916b] Running
	I1007 13:17:01.472817  788035 system_pods.go:89] "etcd-test-preload-438901" [b711ddca-a452-48c9-bc40-03766d637ad5] Running
	I1007 13:17:01.472823  788035 system_pods.go:89] "kube-apiserver-test-preload-438901" [254cb8fc-94cb-4380-9ddf-b2d406bcf9fe] Running
	I1007 13:17:01.472829  788035 system_pods.go:89] "kube-controller-manager-test-preload-438901" [2bb988d7-9b6c-4dec-adf5-85ac50e40b77] Running
	I1007 13:17:01.472833  788035 system_pods.go:89] "kube-proxy-flkxg" [459352ca-053f-4f04-8b0b-7f9595171594] Running
	I1007 13:17:01.472838  788035 system_pods.go:89] "kube-scheduler-test-preload-438901" [f3c1f95e-c2e8-4702-a07b-8ba1b8d8b09d] Running
	I1007 13:17:01.472843  788035 system_pods.go:89] "storage-provisioner" [19e4864a-dceb-4582-9640-59e53fb889a9] Running
	I1007 13:17:01.472853  788035 system_pods.go:126] duration metric: took 204.670021ms to wait for k8s-apps to be running ...
	I1007 13:17:01.472883  788035 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:17:01.472935  788035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:17:01.491068  788035 system_svc.go:56] duration metric: took 18.192518ms WaitForService to wait for kubelet
	I1007 13:17:01.491108  788035 kubeadm.go:582] duration metric: took 10.627317575s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:17:01.491127  788035 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:17:01.667348  788035 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:17:01.667377  788035 node_conditions.go:123] node cpu capacity is 2
	I1007 13:17:01.667387  788035 node_conditions.go:105] duration metric: took 176.255149ms to run NodePressure ...
	I1007 13:17:01.667399  788035 start.go:241] waiting for startup goroutines ...
	I1007 13:17:01.667406  788035 start.go:246] waiting for cluster config update ...
	I1007 13:17:01.667416  788035 start.go:255] writing updated cluster config ...
	I1007 13:17:01.667664  788035 ssh_runner.go:195] Run: rm -f paused
	I1007 13:17:01.718522  788035 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1007 13:17:01.720557  788035 out.go:201] 
	W1007 13:17:01.721916  788035 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1007 13:17:01.723278  788035 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1007 13:17:01.724603  788035 out.go:177] * Done! kubectl is now configured to use "test-preload-438901" cluster and "default" namespace by default
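For reference, the apiserver health probe logged above (api_server.go checking https://192.168.39.238:8443/healthz and getting "200: ok") can be reproduced outside minikube with a short Go program. The sketch below is illustrative only, not minikube's actual implementation; it assumes the client certificate, key, and CA paths shown in the rest.Config dump earlier in this log, which belong to this CI profile and would differ on another machine.

    // healthz_probe.go - minimal sketch of the apiserver health check seen above.
    // The certificate paths are taken from the rest.Config dump in this log and
    // are assumptions tied to this CI profile; adjust them for your own setup.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/test-preload-438901"

        // Client certificate and key used to authenticate to the apiserver.
        cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
        if err != nil {
            panic(err)
        }

        // CA bundle so the apiserver's serving certificate is trusted.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}

        resp, err := client.Get("https://192.168.39.238:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)

        // A healthy control plane answers "200 ok", matching the log entry above.
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }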
	
	
	==> CRI-O <==
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.655149762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307022655084455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=188ca196-2782-40fd-914c-78e7818603ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.655648998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44e5dfbf-75a3-494c-ab53-912df35fa604 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.655709240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44e5dfbf-75a3-494c-ab53-912df35fa604 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.655926733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e43c0cde183835021d1bfd38beb37d5aeb3a510e8e6c9fb24175bc1bb149e806,PodSandboxId:3ac229fb32190a6e79744f649e122edd31a4c880f62b88bf8536f3db1739b75c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728307016699021374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wpmdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22c80db-6231-4394-b156-1957540d916b,},Annotations:map[string]string{io.kubernetes.container.hash: 9aac65c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f967bec437dc63dfbbeadf22ed21754481c1786041d02d4e16f4b434288b6ff,PodSandboxId:e36beb128c3cb9055a9b428cb298587369753aaf67b911e3e3fa58da9616507d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728307009487503287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flkxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 459352ca-053f-4f04-8b0b-7f9595171594,},Annotations:map[string]string{io.kubernetes.container.hash: edb8f2a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412addccbe5da06ee8dbf30d229b148b40babffb5d3abfad1e15faf3b651530,PodSandboxId:620b6b725b2be9c32ece132683a4a9a2fc0cf4ebfe36112381b093638603d0cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728307009135096683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
e4864a-dceb-4582-9640-59e53fb889a9,},Annotations:map[string]string{io.kubernetes.container.hash: 463cca2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9462d04f38fd4bd4dd3309509ed3fc20a0f95ea4f50fc0be72142f97403f716,PodSandboxId:ad3846abb8d4c65e730b09466be9f399ca2bed097e27d491bc1241e6e3fb11b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728307002959811483,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd67a24a
bbfc28d247720883b993060,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af14cc23ef63f4724d1c070d14a87b3a5503047c5e0d7d07a005da510ec9cc9d,PodSandboxId:f41ca00e0c675cfbaf85718c558ea949d7dc90be89231eb399835a2f25d35322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728307002916484429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52b910dc287329523585306e4dd442c,},Annotations:map
[string]string{io.kubernetes.container.hash: d7b433fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d391bc51677ca86bfa32f54ffa16c8dac50b24235402db2a396aeb4161326fe0,PodSandboxId:86daebfa747e345d8495ba0f46fc635b74d6790500f9dbe86aca9c83be5dc52f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728307002871587030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a52bd6b401a775dc95321088fee3e65,},Annotations:map[string]str
ing{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa4a5cc173c11841ae463a5bfc615452dd5fcc06b73d6ce2713d154fde69288,PodSandboxId:ec6d2d48dbc6eed8e350c3de3bfa3470b98715deb8e43b34c2e74a19fc231c3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728307002821886386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a0941db532601c76e52e2b4610f1669,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44e5dfbf-75a3-494c-ab53-912df35fa604 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.695439748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb32a342-5567-41cb-b0d9-3aa340f01b70 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.695515853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb32a342-5567-41cb-b0d9-3aa340f01b70 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.696712505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c11d777-bb7e-454c-b846-c16056320b37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.697460445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307022697437632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c11d777-bb7e-454c-b846-c16056320b37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.698086838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e0cf653-9313-4d6c-ab23-efd95bff86fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.698137939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e0cf653-9313-4d6c-ab23-efd95bff86fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.698297128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e43c0cde183835021d1bfd38beb37d5aeb3a510e8e6c9fb24175bc1bb149e806,PodSandboxId:3ac229fb32190a6e79744f649e122edd31a4c880f62b88bf8536f3db1739b75c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728307016699021374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wpmdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22c80db-6231-4394-b156-1957540d916b,},Annotations:map[string]string{io.kubernetes.container.hash: 9aac65c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f967bec437dc63dfbbeadf22ed21754481c1786041d02d4e16f4b434288b6ff,PodSandboxId:e36beb128c3cb9055a9b428cb298587369753aaf67b911e3e3fa58da9616507d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728307009487503287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flkxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 459352ca-053f-4f04-8b0b-7f9595171594,},Annotations:map[string]string{io.kubernetes.container.hash: edb8f2a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412addccbe5da06ee8dbf30d229b148b40babffb5d3abfad1e15faf3b651530,PodSandboxId:620b6b725b2be9c32ece132683a4a9a2fc0cf4ebfe36112381b093638603d0cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728307009135096683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
e4864a-dceb-4582-9640-59e53fb889a9,},Annotations:map[string]string{io.kubernetes.container.hash: 463cca2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9462d04f38fd4bd4dd3309509ed3fc20a0f95ea4f50fc0be72142f97403f716,PodSandboxId:ad3846abb8d4c65e730b09466be9f399ca2bed097e27d491bc1241e6e3fb11b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728307002959811483,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd67a24a
bbfc28d247720883b993060,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af14cc23ef63f4724d1c070d14a87b3a5503047c5e0d7d07a005da510ec9cc9d,PodSandboxId:f41ca00e0c675cfbaf85718c558ea949d7dc90be89231eb399835a2f25d35322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728307002916484429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52b910dc287329523585306e4dd442c,},Annotations:map
[string]string{io.kubernetes.container.hash: d7b433fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d391bc51677ca86bfa32f54ffa16c8dac50b24235402db2a396aeb4161326fe0,PodSandboxId:86daebfa747e345d8495ba0f46fc635b74d6790500f9dbe86aca9c83be5dc52f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728307002871587030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a52bd6b401a775dc95321088fee3e65,},Annotations:map[string]str
ing{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa4a5cc173c11841ae463a5bfc615452dd5fcc06b73d6ce2713d154fde69288,PodSandboxId:ec6d2d48dbc6eed8e350c3de3bfa3470b98715deb8e43b34c2e74a19fc231c3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728307002821886386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a0941db532601c76e52e2b4610f1669,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e0cf653-9313-4d6c-ab23-efd95bff86fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.738451568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25743717-878e-46a2-bf33-fd5cd66e009c name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.738527775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25743717-878e-46a2-bf33-fd5cd66e009c name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.740241609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b5ad2d8-8420-4ad2-9905-5c45b6a3db98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.740671583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307022740649308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b5ad2d8-8420-4ad2-9905-5c45b6a3db98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.741457083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb61413c-5d60-41ed-a06c-ecbc6365c9c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.741525441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb61413c-5d60-41ed-a06c-ecbc6365c9c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.741694074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e43c0cde183835021d1bfd38beb37d5aeb3a510e8e6c9fb24175bc1bb149e806,PodSandboxId:3ac229fb32190a6e79744f649e122edd31a4c880f62b88bf8536f3db1739b75c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728307016699021374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wpmdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22c80db-6231-4394-b156-1957540d916b,},Annotations:map[string]string{io.kubernetes.container.hash: 9aac65c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f967bec437dc63dfbbeadf22ed21754481c1786041d02d4e16f4b434288b6ff,PodSandboxId:e36beb128c3cb9055a9b428cb298587369753aaf67b911e3e3fa58da9616507d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728307009487503287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flkxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 459352ca-053f-4f04-8b0b-7f9595171594,},Annotations:map[string]string{io.kubernetes.container.hash: edb8f2a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412addccbe5da06ee8dbf30d229b148b40babffb5d3abfad1e15faf3b651530,PodSandboxId:620b6b725b2be9c32ece132683a4a9a2fc0cf4ebfe36112381b093638603d0cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728307009135096683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
e4864a-dceb-4582-9640-59e53fb889a9,},Annotations:map[string]string{io.kubernetes.container.hash: 463cca2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9462d04f38fd4bd4dd3309509ed3fc20a0f95ea4f50fc0be72142f97403f716,PodSandboxId:ad3846abb8d4c65e730b09466be9f399ca2bed097e27d491bc1241e6e3fb11b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728307002959811483,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd67a24a
bbfc28d247720883b993060,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af14cc23ef63f4724d1c070d14a87b3a5503047c5e0d7d07a005da510ec9cc9d,PodSandboxId:f41ca00e0c675cfbaf85718c558ea949d7dc90be89231eb399835a2f25d35322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728307002916484429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52b910dc287329523585306e4dd442c,},Annotations:map
[string]string{io.kubernetes.container.hash: d7b433fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d391bc51677ca86bfa32f54ffa16c8dac50b24235402db2a396aeb4161326fe0,PodSandboxId:86daebfa747e345d8495ba0f46fc635b74d6790500f9dbe86aca9c83be5dc52f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728307002871587030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a52bd6b401a775dc95321088fee3e65,},Annotations:map[string]str
ing{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa4a5cc173c11841ae463a5bfc615452dd5fcc06b73d6ce2713d154fde69288,PodSandboxId:ec6d2d48dbc6eed8e350c3de3bfa3470b98715deb8e43b34c2e74a19fc231c3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728307002821886386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a0941db532601c76e52e2b4610f1669,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb61413c-5d60-41ed-a06c-ecbc6365c9c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.775946239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11dad1af-ec87-4176-a7f1-4c5a98935678 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.776043805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11dad1af-ec87-4176-a7f1-4c5a98935678 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.777220788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60bc48e8-8df1-4204-aa36-42e4bd4856ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.777902393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307022777874228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60bc48e8-8df1-4204-aa36-42e4bd4856ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.778698907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da7e2a4c-ce23-441c-9a37-08d5f5b6cdb9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.778823151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da7e2a4c-ce23-441c-9a37-08d5f5b6cdb9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:17:02 test-preload-438901 crio[674]: time="2024-10-07 13:17:02.779006948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e43c0cde183835021d1bfd38beb37d5aeb3a510e8e6c9fb24175bc1bb149e806,PodSandboxId:3ac229fb32190a6e79744f649e122edd31a4c880f62b88bf8536f3db1739b75c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728307016699021374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wpmdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22c80db-6231-4394-b156-1957540d916b,},Annotations:map[string]string{io.kubernetes.container.hash: 9aac65c3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f967bec437dc63dfbbeadf22ed21754481c1786041d02d4e16f4b434288b6ff,PodSandboxId:e36beb128c3cb9055a9b428cb298587369753aaf67b911e3e3fa58da9616507d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728307009487503287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flkxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 459352ca-053f-4f04-8b0b-7f9595171594,},Annotations:map[string]string{io.kubernetes.container.hash: edb8f2a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412addccbe5da06ee8dbf30d229b148b40babffb5d3abfad1e15faf3b651530,PodSandboxId:620b6b725b2be9c32ece132683a4a9a2fc0cf4ebfe36112381b093638603d0cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728307009135096683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
e4864a-dceb-4582-9640-59e53fb889a9,},Annotations:map[string]string{io.kubernetes.container.hash: 463cca2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9462d04f38fd4bd4dd3309509ed3fc20a0f95ea4f50fc0be72142f97403f716,PodSandboxId:ad3846abb8d4c65e730b09466be9f399ca2bed097e27d491bc1241e6e3fb11b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728307002959811483,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd67a24a
bbfc28d247720883b993060,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af14cc23ef63f4724d1c070d14a87b3a5503047c5e0d7d07a005da510ec9cc9d,PodSandboxId:f41ca00e0c675cfbaf85718c558ea949d7dc90be89231eb399835a2f25d35322,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728307002916484429,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52b910dc287329523585306e4dd442c,},Annotations:map
[string]string{io.kubernetes.container.hash: d7b433fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d391bc51677ca86bfa32f54ffa16c8dac50b24235402db2a396aeb4161326fe0,PodSandboxId:86daebfa747e345d8495ba0f46fc635b74d6790500f9dbe86aca9c83be5dc52f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728307002871587030,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a52bd6b401a775dc95321088fee3e65,},Annotations:map[string]str
ing{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa4a5cc173c11841ae463a5bfc615452dd5fcc06b73d6ce2713d154fde69288,PodSandboxId:ec6d2d48dbc6eed8e350c3de3bfa3470b98715deb8e43b34c2e74a19fc231c3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728307002821886386,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-438901,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a0941db532601c76e52e2b4610f1669,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da7e2a4c-ce23-441c-9a37-08d5f5b6cdb9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e43c0cde18383       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   3ac229fb32190       coredns-6d4b75cb6d-wpmdb
	8f967bec437dc       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   e36beb128c3cb       kube-proxy-flkxg
	0412addccbe5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   620b6b725b2be       storage-provisioner
	c9462d04f38fd       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   ad3846abb8d4c       kube-scheduler-test-preload-438901
	af14cc23ef63f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   f41ca00e0c675       etcd-test-preload-438901
	d391bc51677ca       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   86daebfa747e3       kube-apiserver-test-preload-438901
	3fa4a5cc173c1       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   ec6d2d48dbc6e       kube-controller-manager-test-preload-438901
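The container table above corresponds to the ListContainers RPCs visible in the CRI-O debug log (every container at ATTEMPT 1, i.e. restarted once, which is the state TestPreload exercises). A minimal sketch of issuing that same RPC directly over the CRI socket is shown below; it assumes the crio socket path from the node's kubeadm cri-socket annotation and is not how the report itself collects this table.

    // cri_list.go - sketch of the ListContainers call behind the CRI-O debug
    // entries and the container status table above. The socket path is an
    // assumption taken from the node annotation unix:///var/run/crio/crio.sock.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }

        for _, c := range resp.Containers {
            // Attempt 1 on every container here reflects the post-restart state
            // of the preloaded cluster.
            fmt.Printf("%-13.13s %-25s attempt=%d state=%s\n",
                c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }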
	
	
	==> coredns [e43c0cde183835021d1bfd38beb37d5aeb3a510e8e6c9fb24175bc1bb149e806] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55072 - 39748 "HINFO IN 1161056698354181934.5327192655246762098. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011855262s
	
	
	==> describe nodes <==
	Name:               test-preload-438901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-438901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=test-preload-438901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_15_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:15:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-438901
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:16:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:16:57 +0000   Mon, 07 Oct 2024 13:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:16:57 +0000   Mon, 07 Oct 2024 13:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:16:57 +0000   Mon, 07 Oct 2024 13:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:16:57 +0000   Mon, 07 Oct 2024 13:16:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    test-preload-438901
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22f34740531248ffa8a1c2b3b4643ec4
	  System UUID:                22f34740-5312-48ff-a8a1-c2b3b4643ec4
	  Boot ID:                    11a9aa29-c1c2-4ed4-bb81-6f24e3d31277
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wpmdb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-test-preload-438901                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-438901             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-test-preload-438901    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-flkxg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-test-preload-438901             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-438901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-438901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-438901 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                kubelet          Node test-preload-438901 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-438901 event: Registered Node test-preload-438901 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-438901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-438901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-438901 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-438901 event: Registered Node test-preload-438901 in Controller
	
	
	==> dmesg <==
	[Oct 7 13:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051159] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040473] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.851330] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.576035] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614842] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.321051] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.057065] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065267] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.200534] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.121396] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.299377] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[ +14.186433] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.060892] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.208967] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
	[  +4.838880] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.161488] systemd-fstab-generator[1758]: Ignoring "noauto" option for root device
	[  +5.556088] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [af14cc23ef63f4724d1c070d14a87b3a5503047c5e0d7d07a005da510ec9cc9d] <==
	{"level":"info","ts":"2024-10-07T13:16:43.286Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fff3906243738b90","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-07T13:16:43.291Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-07T13:16:43.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(18443243650725153680)"}
	{"level":"info","ts":"2024-10-07T13:16:43.292Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","added-peer-id":"fff3906243738b90","added-peer-peer-urls":["https://192.168.39.238:2380"]}
	{"level":"info","ts":"2024-10-07T13:16:43.292Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:16:43.292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:16:43.301Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T13:16:43.302Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fff3906243738b90","initial-advertise-peer-urls":["https://192.168.39.238:2380"],"listen-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T13:16:43.302Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T13:16:43.302Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-10-07T13:16:43.302Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 2"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became candidate at term 3"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became leader at term 3"}
	{"level":"info","ts":"2024-10-07T13:16:45.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fff3906243738b90 elected leader fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-10-07T13:16:45.034Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fff3906243738b90","local-member-attributes":"{Name:test-preload-438901 ClientURLs:[https://192.168.39.238:2379]}","request-path":"/0/members/fff3906243738b90/attributes","cluster-id":"3658928c14b8a733","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T13:16:45.035Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:16:45.035Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:16:45.037Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.238:2379"}
	{"level":"info","ts":"2024-10-07T13:16:45.037Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T13:16:45.037Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T13:16:45.037Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:17:03 up 0 min,  0 users,  load average: 1.11, 0.31, 0.10
	Linux test-preload-438901 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d391bc51677ca86bfa32f54ffa16c8dac50b24235402db2a396aeb4161326fe0] <==
	I1007 13:16:47.531242       1 establishing_controller.go:76] Starting EstablishingController
	I1007 13:16:47.531602       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1007 13:16:47.531654       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1007 13:16:47.531693       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1007 13:16:47.546256       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1007 13:16:47.546348       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1007 13:16:47.660657       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1007 13:16:47.700159       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1007 13:16:47.705334       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1007 13:16:47.705686       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1007 13:16:47.709930       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 13:16:47.719113       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 13:16:47.722564       1 cache.go:39] Caches are synced for autoregister controller
	I1007 13:16:47.722905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1007 13:16:47.747318       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1007 13:16:48.181068       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 13:16:48.522341       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 13:16:49.360635       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1007 13:16:49.390946       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1007 13:16:49.452366       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1007 13:16:49.478808       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 13:16:49.488568       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 13:16:49.852872       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1007 13:17:00.422446       1 controller.go:611] quota admission added evaluator for: endpoints
	I1007 13:17:00.617098       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3fa4a5cc173c11841ae463a5bfc615452dd5fcc06b73d6ce2713d154fde69288] <==
	I1007 13:17:00.432626       1 shared_informer.go:262] Caches are synced for node
	I1007 13:17:00.432667       1 range_allocator.go:173] Starting range CIDR allocator
	I1007 13:17:00.432672       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1007 13:17:00.432680       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1007 13:17:00.437096       1 shared_informer.go:262] Caches are synced for crt configmap
	I1007 13:17:00.440493       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1007 13:17:00.450102       1 shared_informer.go:262] Caches are synced for HPA
	I1007 13:17:00.451165       1 shared_informer.go:262] Caches are synced for GC
	I1007 13:17:00.459121       1 shared_informer.go:262] Caches are synced for deployment
	I1007 13:17:00.464633       1 shared_informer.go:262] Caches are synced for daemon sets
	I1007 13:17:00.486221       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1007 13:17:00.501681       1 shared_informer.go:262] Caches are synced for TTL
	I1007 13:17:00.513029       1 shared_informer.go:262] Caches are synced for taint
	I1007 13:17:00.513822       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1007 13:17:00.513959       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-438901. Assuming now as a timestamp.
	I1007 13:17:00.514016       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1007 13:17:00.514097       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1007 13:17:00.514817       1 event.go:294] "Event occurred" object="test-preload-438901" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-438901 event: Registered Node test-preload-438901 in Controller"
	I1007 13:17:00.518040       1 shared_informer.go:262] Caches are synced for persistent volume
	I1007 13:17:00.521956       1 shared_informer.go:262] Caches are synced for attach detach
	I1007 13:17:00.580972       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 13:17:00.627498       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 13:17:01.057794       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 13:17:01.057885       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 13:17:01.073223       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [8f967bec437dc63dfbbeadf22ed21754481c1786041d02d4e16f4b434288b6ff] <==
	I1007 13:16:49.798610       1 node.go:163] Successfully retrieved node IP: 192.168.39.238
	I1007 13:16:49.798890       1 server_others.go:138] "Detected node IP" address="192.168.39.238"
	I1007 13:16:49.799001       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1007 13:16:49.839203       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1007 13:16:49.839276       1 server_others.go:206] "Using iptables Proxier"
	I1007 13:16:49.839335       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1007 13:16:49.840144       1 server.go:661] "Version info" version="v1.24.4"
	I1007 13:16:49.840192       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:16:49.842451       1 config.go:317] "Starting service config controller"
	I1007 13:16:49.842981       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1007 13:16:49.843047       1 config.go:226] "Starting endpoint slice config controller"
	I1007 13:16:49.843066       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1007 13:16:49.846793       1 config.go:444] "Starting node config controller"
	I1007 13:16:49.847817       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1007 13:16:49.943724       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1007 13:16:49.943892       1 shared_informer.go:262] Caches are synced for service config
	I1007 13:16:49.948946       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c9462d04f38fd4bd4dd3309509ed3fc20a0f95ea4f50fc0be72142f97403f716] <==
	I1007 13:16:44.303555       1 serving.go:348] Generated self-signed cert in-memory
	W1007 13:16:47.575175       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 13:16:47.577112       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 13:16:47.577371       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 13:16:47.577400       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 13:16:47.657995       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1007 13:16:47.658037       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:16:47.666192       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1007 13:16:47.666379       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 13:16:47.666420       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 13:16:47.671023       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1007 13:16:47.767080       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193259    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/459352ca-053f-4f04-8b0b-7f9595171594-lib-modules\") pod \"kube-proxy-flkxg\" (UID: \"459352ca-053f-4f04-8b0b-7f9595171594\") " pod="kube-system/kube-proxy-flkxg"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193310    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd85w\" (UniqueName: \"kubernetes.io/projected/459352ca-053f-4f04-8b0b-7f9595171594-kube-api-access-pd85w\") pod \"kube-proxy-flkxg\" (UID: \"459352ca-053f-4f04-8b0b-7f9595171594\") " pod="kube-system/kube-proxy-flkxg"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193362    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kbw\" (UniqueName: \"kubernetes.io/projected/b22c80db-6231-4394-b156-1957540d916b-kube-api-access-k5kbw\") pod \"coredns-6d4b75cb6d-wpmdb\" (UID: \"b22c80db-6231-4394-b156-1957540d916b\") " pod="kube-system/coredns-6d4b75cb6d-wpmdb"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193428    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrv8h\" (UniqueName: \"kubernetes.io/projected/19e4864a-dceb-4582-9640-59e53fb889a9-kube-api-access-wrv8h\") pod \"storage-provisioner\" (UID: \"19e4864a-dceb-4582-9640-59e53fb889a9\") " pod="kube-system/storage-provisioner"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193776    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume\") pod \"coredns-6d4b75cb6d-wpmdb\" (UID: \"b22c80db-6231-4394-b156-1957540d916b\") " pod="kube-system/coredns-6d4b75cb6d-wpmdb"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.193987    1129 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/19e4864a-dceb-4582-9640-59e53fb889a9-tmp\") pod \"storage-provisioner\" (UID: \"19e4864a-dceb-4582-9640-59e53fb889a9\") " pod="kube-system/storage-provisioner"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.194087    1129 reconciler.go:159] "Reconciler: start to sync state"
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.630861    1129 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mg9t\" (UniqueName: \"kubernetes.io/projected/e7a37ec8-8247-47b9-a89c-c31758f8941e-kube-api-access-9mg9t\") pod \"e7a37ec8-8247-47b9-a89c-c31758f8941e\" (UID: \"e7a37ec8-8247-47b9-a89c-c31758f8941e\") "
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.631033    1129 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7a37ec8-8247-47b9-a89c-c31758f8941e-config-volume\") pod \"e7a37ec8-8247-47b9-a89c-c31758f8941e\" (UID: \"e7a37ec8-8247-47b9-a89c-c31758f8941e\") "
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: E1007 13:16:48.631962    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: E1007 13:16:48.632494    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume podName:b22c80db-6231-4394-b156-1957540d916b nodeName:}" failed. No retries permitted until 2024-10-07 13:16:49.132405804 +0000 UTC m=+7.167935516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume") pod "coredns-6d4b75cb6d-wpmdb" (UID: "b22c80db-6231-4394-b156-1957540d916b") : object "kube-system"/"coredns" not registered
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: W1007 13:16:48.633279    1129 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e7a37ec8-8247-47b9-a89c-c31758f8941e/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: W1007 13:16:48.633718    1129 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e7a37ec8-8247-47b9-a89c-c31758f8941e/volumes/kubernetes.io~projected/kube-api-access-9mg9t: clearQuota called, but quotas disabled
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.634224    1129 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a37ec8-8247-47b9-a89c-c31758f8941e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7a37ec8-8247-47b9-a89c-c31758f8941e" (UID: "e7a37ec8-8247-47b9-a89c-c31758f8941e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.634238    1129 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a37ec8-8247-47b9-a89c-c31758f8941e-kube-api-access-9mg9t" (OuterVolumeSpecName: "kube-api-access-9mg9t") pod "e7a37ec8-8247-47b9-a89c-c31758f8941e" (UID: "e7a37ec8-8247-47b9-a89c-c31758f8941e"). InnerVolumeSpecName "kube-api-access-9mg9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.731872    1129 reconciler.go:384] "Volume detached for volume \"kube-api-access-9mg9t\" (UniqueName: \"kubernetes.io/projected/e7a37ec8-8247-47b9-a89c-c31758f8941e-kube-api-access-9mg9t\") on node \"test-preload-438901\" DevicePath \"\""
	Oct 07 13:16:48 test-preload-438901 kubelet[1129]: I1007 13:16:48.732001    1129 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7a37ec8-8247-47b9-a89c-c31758f8941e-config-volume\") on node \"test-preload-438901\" DevicePath \"\""
	Oct 07 13:16:49 test-preload-438901 kubelet[1129]: E1007 13:16:49.135984    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 13:16:49 test-preload-438901 kubelet[1129]: E1007 13:16:49.136061    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume podName:b22c80db-6231-4394-b156-1957540d916b nodeName:}" failed. No retries permitted until 2024-10-07 13:16:50.136046252 +0000 UTC m=+8.171575949 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume") pod "coredns-6d4b75cb6d-wpmdb" (UID: "b22c80db-6231-4394-b156-1957540d916b") : object "kube-system"/"coredns" not registered
	Oct 07 13:16:50 test-preload-438901 kubelet[1129]: E1007 13:16:50.145076    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 13:16:50 test-preload-438901 kubelet[1129]: E1007 13:16:50.145210    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume podName:b22c80db-6231-4394-b156-1957540d916b nodeName:}" failed. No retries permitted until 2024-10-07 13:16:52.145193037 +0000 UTC m=+10.180722735 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume") pod "coredns-6d4b75cb6d-wpmdb" (UID: "b22c80db-6231-4394-b156-1957540d916b") : object "kube-system"/"coredns" not registered
	Oct 07 13:16:50 test-preload-438901 kubelet[1129]: E1007 13:16:50.220221    1129 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wpmdb" podUID=b22c80db-6231-4394-b156-1957540d916b
	Oct 07 13:16:50 test-preload-438901 kubelet[1129]: I1007 13:16:50.225142    1129 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e7a37ec8-8247-47b9-a89c-c31758f8941e path="/var/lib/kubelet/pods/e7a37ec8-8247-47b9-a89c-c31758f8941e/volumes"
	Oct 07 13:16:52 test-preload-438901 kubelet[1129]: E1007 13:16:52.160340    1129 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 13:16:52 test-preload-438901 kubelet[1129]: E1007 13:16:52.161248    1129 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume podName:b22c80db-6231-4394-b156-1957540d916b nodeName:}" failed. No retries permitted until 2024-10-07 13:16:56.161220389 +0000 UTC m=+14.196750100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b22c80db-6231-4394-b156-1957540d916b-config-volume") pod "coredns-6d4b75cb6d-wpmdb" (UID: "b22c80db-6231-4394-b156-1957540d916b") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [0412addccbe5da06ee8dbf30d229b148b40babffb5d3abfad1e15faf3b651530] <==
	I1007 13:16:49.266628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-438901 -n test-preload-438901
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-438901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-438901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-438901
--- FAIL: TestPreload (164.71s)
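
For local triage, the same post-mortem commands shown above (the API-server status check, the listing of non-Running pods, and the profile cleanup) can be re-run outside the harness. The following is a minimal Go sketch only, not the actual helpers_test.go implementation; it assumes the minikube binary sits at out/minikube-linux-amd64 and that kubectl is on PATH, and it reuses the profile name taken from the log above.

// postmortem.go: hypothetical sketch that re-runs the report's post-mortem commands.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, prints its combined output, and notes non-zero exits.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Printf("(non-zero exit: %v)\n", err)
	}
}

func main() {
	profile := "test-preload-438901" // profile name taken from the log above

	// API server status, mirroring the helpers_test.go status call above.
	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)

	// Pods not in the Running phase, mirroring the kubectl query above.
	run("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")

	// Delete the profile afterwards, as the cleanup step above does.
	run("out/minikube-linux-amd64", "delete", "-p", profile)
}
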

                                                
                                    
x
+
TestKubernetesUpgrade (398.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1007 13:19:56.771533  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:20:13.698377  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m12.497809954s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-625039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-625039" primary control-plane node in "kubernetes-upgrade-625039" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:19:54.713884  792861 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:19:54.714058  792861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:19:54.714069  792861 out.go:358] Setting ErrFile to fd 2...
	I1007 13:19:54.714074  792861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:19:54.714320  792861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:19:54.714955  792861 out.go:352] Setting JSON to false
	I1007 13:19:54.716046  792861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10944,"bootTime":1728296251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:19:54.716175  792861 start.go:139] virtualization: kvm guest
	I1007 13:19:54.799653  792861 out.go:177] * [kubernetes-upgrade-625039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:19:54.857813  792861 notify.go:220] Checking for updates...
	I1007 13:19:54.857993  792861 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:19:54.973290  792861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:19:55.077847  792861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:19:55.206668  792861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:19:55.292728  792861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:19:55.392539  792861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:19:55.434556  792861 config.go:182] Loaded profile config "NoKubernetes-499494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:19:55.434766  792861 config.go:182] Loaded profile config "offline-crio-484725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:19:55.434880  792861 config.go:182] Loaded profile config "running-upgrade-533449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1007 13:19:55.435010  792861 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:19:55.563235  792861 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:19:55.651744  792861 start.go:297] selected driver: kvm2
	I1007 13:19:55.651779  792861 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:19:55.651797  792861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:19:55.652954  792861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:19:55.653098  792861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:19:55.670767  792861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:19:55.670823  792861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:19:55.671095  792861 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 13:19:55.671126  792861 cni.go:84] Creating CNI manager for ""
	I1007 13:19:55.671175  792861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:19:55.671184  792861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 13:19:55.671252  792861 start.go:340] cluster config:
	{Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:19:55.671363  792861 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:19:55.766871  792861 out.go:177] * Starting "kubernetes-upgrade-625039" primary control-plane node in "kubernetes-upgrade-625039" cluster
	I1007 13:19:55.845679  792861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:19:55.845764  792861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 13:19:55.845782  792861 cache.go:56] Caching tarball of preloaded images
	I1007 13:19:55.845915  792861 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:19:55.845930  792861 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 13:19:55.846117  792861 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/config.json ...
	I1007 13:19:55.846143  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/config.json: {Name:mk21b19ce77a5b0cb4d7ed7b9d03abf5cafdd50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:19:55.846342  792861 start.go:360] acquireMachinesLock for kubernetes-upgrade-625039: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:20:35.615138  792861 start.go:364] duration metric: took 39.768703627s to acquireMachinesLock for "kubernetes-upgrade-625039"
	I1007 13:20:35.615220  792861 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:20:35.615335  792861 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:20:35.617605  792861 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 13:20:35.617838  792861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:20:35.617910  792861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:20:35.637134  792861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I1007 13:20:35.637630  792861 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:20:35.638387  792861 main.go:141] libmachine: Using API Version  1
	I1007 13:20:35.638410  792861 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:20:35.638965  792861 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:20:35.639166  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:20:35.639322  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:35.639481  792861 start.go:159] libmachine.API.Create for "kubernetes-upgrade-625039" (driver="kvm2")
	I1007 13:20:35.639518  792861 client.go:168] LocalClient.Create starting
	I1007 13:20:35.639555  792861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:20:35.639600  792861 main.go:141] libmachine: Decoding PEM data...
	I1007 13:20:35.639621  792861 main.go:141] libmachine: Parsing certificate...
	I1007 13:20:35.639688  792861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:20:35.639742  792861 main.go:141] libmachine: Decoding PEM data...
	I1007 13:20:35.639765  792861 main.go:141] libmachine: Parsing certificate...
	I1007 13:20:35.639796  792861 main.go:141] libmachine: Running pre-create checks...
	I1007 13:20:35.639815  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .PreCreateCheck
	I1007 13:20:35.640188  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetConfigRaw
	I1007 13:20:35.640608  792861 main.go:141] libmachine: Creating machine...
	I1007 13:20:35.640628  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .Create
	I1007 13:20:35.640801  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Creating KVM machine...
	I1007 13:20:35.642275  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found existing default KVM network
	I1007 13:20:35.644208  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:35.643898  793356 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:e6:c7} reservation:<nil>}
	I1007 13:20:35.645947  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:35.645724  793356 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:f8:11} reservation:<nil>}
	I1007 13:20:35.647557  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:35.647434  793356 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:6e:38} reservation:<nil>}
	I1007 13:20:35.648962  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:35.648852  793356 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038d170}
	I1007 13:20:35.649125  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | created network xml: 
	I1007 13:20:35.649156  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | <network>
	I1007 13:20:35.649169  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   <name>mk-kubernetes-upgrade-625039</name>
	I1007 13:20:35.649190  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   <dns enable='no'/>
	I1007 13:20:35.649202  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   
	I1007 13:20:35.649214  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1007 13:20:35.649228  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |     <dhcp>
	I1007 13:20:35.649241  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1007 13:20:35.649253  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |     </dhcp>
	I1007 13:20:35.649264  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   </ip>
	I1007 13:20:35.649277  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG |   
	I1007 13:20:35.649287  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | </network>
	I1007 13:20:35.649301  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | 
	I1007 13:20:35.655578  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | trying to create private KVM network mk-kubernetes-upgrade-625039 192.168.72.0/24...
	I1007 13:20:35.759970  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | private KVM network mk-kubernetes-upgrade-625039 192.168.72.0/24 created
	I1007 13:20:35.760013  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039 ...
	I1007 13:20:35.760029  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:35.759925  793356 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:20:35.760136  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:20:35.760190  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:20:36.064753  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:36.064530  793356 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa...
	I1007 13:20:36.296424  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:36.296242  793356 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/kubernetes-upgrade-625039.rawdisk...
	I1007 13:20:36.296464  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Writing magic tar header
	I1007 13:20:36.296521  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Writing SSH key tar header
	I1007 13:20:36.296581  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:36.296375  793356 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039 ...
	I1007 13:20:36.296608  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039 (perms=drwx------)
	I1007 13:20:36.296635  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:20:36.296652  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:20:36.296666  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039
	I1007 13:20:36.296686  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:20:36.296696  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:20:36.296714  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:20:36.296724  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:20:36.296776  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:20:36.296808  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Checking permissions on dir: /home
	I1007 13:20:36.296845  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:20:36.296862  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Skipping /home - not owner
	I1007 13:20:36.296897  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:20:36.296919  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:20:36.296943  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Creating domain...
	I1007 13:20:36.299010  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) define libvirt domain using xml: 
	I1007 13:20:36.299048  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) <domain type='kvm'>
	I1007 13:20:36.299060  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <name>kubernetes-upgrade-625039</name>
	I1007 13:20:36.299067  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <memory unit='MiB'>2200</memory>
	I1007 13:20:36.299075  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <vcpu>2</vcpu>
	I1007 13:20:36.299082  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <features>
	I1007 13:20:36.299090  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <acpi/>
	I1007 13:20:36.299097  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <apic/>
	I1007 13:20:36.299105  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <pae/>
	I1007 13:20:36.299111  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     
	I1007 13:20:36.299118  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   </features>
	I1007 13:20:36.299125  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <cpu mode='host-passthrough'>
	I1007 13:20:36.299132  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   
	I1007 13:20:36.299138  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   </cpu>
	I1007 13:20:36.299146  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <os>
	I1007 13:20:36.299153  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <type>hvm</type>
	I1007 13:20:36.299161  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <boot dev='cdrom'/>
	I1007 13:20:36.299187  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <boot dev='hd'/>
	I1007 13:20:36.299200  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <bootmenu enable='no'/>
	I1007 13:20:36.299206  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   </os>
	I1007 13:20:36.299227  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   <devices>
	I1007 13:20:36.299239  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <disk type='file' device='cdrom'>
	I1007 13:20:36.299259  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/boot2docker.iso'/>
	I1007 13:20:36.299270  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <target dev='hdc' bus='scsi'/>
	I1007 13:20:36.299278  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <readonly/>
	I1007 13:20:36.299285  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </disk>
	I1007 13:20:36.299298  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <disk type='file' device='disk'>
	I1007 13:20:36.299312  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:20:36.299334  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/kubernetes-upgrade-625039.rawdisk'/>
	I1007 13:20:36.299345  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <target dev='hda' bus='virtio'/>
	I1007 13:20:36.299353  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </disk>
	I1007 13:20:36.299361  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <interface type='network'>
	I1007 13:20:36.299375  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <source network='mk-kubernetes-upgrade-625039'/>
	I1007 13:20:36.299386  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <model type='virtio'/>
	I1007 13:20:36.299396  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </interface>
	I1007 13:20:36.299407  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <interface type='network'>
	I1007 13:20:36.299420  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <source network='default'/>
	I1007 13:20:36.299427  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <model type='virtio'/>
	I1007 13:20:36.299437  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </interface>
	I1007 13:20:36.299448  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <serial type='pty'>
	I1007 13:20:36.299459  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <target port='0'/>
	I1007 13:20:36.299470  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </serial>
	I1007 13:20:36.299483  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <console type='pty'>
	I1007 13:20:36.299495  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <target type='serial' port='0'/>
	I1007 13:20:36.299507  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </console>
	I1007 13:20:36.299518  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     <rng model='virtio'>
	I1007 13:20:36.299532  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)       <backend model='random'>/dev/random</backend>
	I1007 13:20:36.299541  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     </rng>
	I1007 13:20:36.299551  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     
	I1007 13:20:36.299561  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)     
	I1007 13:20:36.299573  792861 main.go:141] libmachine: (kubernetes-upgrade-625039)   </devices>
	I1007 13:20:36.299583  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) </domain>
	I1007 13:20:36.299598  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) 
	I1007 13:20:36.304439  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:20:4a:b5 in network default
	I1007 13:20:36.305211  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Ensuring networks are active...
	I1007 13:20:36.305242  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:36.306186  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Ensuring network default is active
	I1007 13:20:36.306625  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Ensuring network mk-kubernetes-upgrade-625039 is active
	I1007 13:20:36.307263  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Getting domain xml...
	I1007 13:20:36.308127  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Creating domain...
	I1007 13:20:36.690890  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Waiting to get IP...
	I1007 13:20:36.691763  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:36.692268  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:36.692311  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:36.692261  793356 retry.go:31] will retry after 297.869942ms: waiting for machine to come up
	I1007 13:20:36.991841  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:36.992303  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:36.992333  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:36.992260  793356 retry.go:31] will retry after 328.417833ms: waiting for machine to come up
	I1007 13:20:37.322847  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:37.323382  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:37.323411  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:37.323299  793356 retry.go:31] will retry after 364.034318ms: waiting for machine to come up
	I1007 13:20:37.688690  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:37.689177  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:37.689205  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:37.689123  793356 retry.go:31] will retry after 560.024462ms: waiting for machine to come up
	I1007 13:20:38.251040  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:38.251476  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:38.251502  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:38.251435  793356 retry.go:31] will retry after 474.865109ms: waiting for machine to come up
	I1007 13:20:38.728145  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:38.728695  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:38.728745  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:38.728649  793356 retry.go:31] will retry after 806.598302ms: waiting for machine to come up
	I1007 13:20:39.536729  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:39.537300  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:39.537332  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:39.537238  793356 retry.go:31] will retry after 1.1529542s: waiting for machine to come up
	I1007 13:20:40.691936  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:40.692455  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:40.692489  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:40.692397  793356 retry.go:31] will retry after 1.117520391s: waiting for machine to come up
	I1007 13:20:41.811463  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:41.812131  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:41.812166  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:41.812082  793356 retry.go:31] will retry after 1.63183418s: waiting for machine to come up
	I1007 13:20:43.446141  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:43.446608  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:43.446642  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:43.446541  793356 retry.go:31] will retry after 2.292442528s: waiting for machine to come up
	I1007 13:20:45.740508  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:45.740939  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:45.740964  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:45.740893  793356 retry.go:31] will retry after 1.763787895s: waiting for machine to come up
	I1007 13:20:47.507026  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:47.507486  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:47.507517  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:47.507432  793356 retry.go:31] will retry after 2.315249055s: waiting for machine to come up
	I1007 13:20:49.826326  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:49.826837  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:49.826884  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:49.826797  793356 retry.go:31] will retry after 4.054239916s: waiting for machine to come up
	I1007 13:20:53.883675  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:53.884273  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find current IP address of domain kubernetes-upgrade-625039 in network mk-kubernetes-upgrade-625039
	I1007 13:20:53.884298  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | I1007 13:20:53.884227  793356 retry.go:31] will retry after 4.4195928s: waiting for machine to come up
	I1007 13:20:58.307589  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.308331  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Found IP for machine: 192.168.72.158
	I1007 13:20:58.308368  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has current primary IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.308378  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Reserving static IP address...
	I1007 13:20:58.308961  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-625039", mac: "52:54:00:9c:29:3b", ip: "192.168.72.158"} in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.394733  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Reserved static IP address: 192.168.72.158
	I1007 13:20:58.394768  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Getting to WaitForSSH function...
	I1007 13:20:58.394778  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Waiting for SSH to be available...
	I1007 13:20:58.398192  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.398753  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:58.398787  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.399253  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Using SSH client type: external
	I1007 13:20:58.399287  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa (-rw-------)
	I1007 13:20:58.399332  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:20:58.399355  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | About to run SSH command:
	I1007 13:20:58.399374  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | exit 0
	I1007 13:20:58.535313  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | SSH cmd err, output: <nil>: 
	I1007 13:20:58.535562  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) KVM machine creation complete!
	I1007 13:20:58.535948  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetConfigRaw
	I1007 13:20:58.536626  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:58.536856  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:58.537047  792861 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:20:58.537062  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetState
	I1007 13:20:58.538479  792861 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:20:58.538497  792861 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:20:58.538505  792861 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:20:58.538513  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:58.541252  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.541594  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:58.541628  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.541903  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:58.542135  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.542297  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.542485  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:58.542651  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:58.542967  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:58.542993  792861 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:20:58.658584  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:20:58.658613  792861 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:20:58.658622  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:58.661778  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.662247  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:58.662279  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.662470  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:58.662744  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.662958  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.663124  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:58.663389  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:58.663647  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:58.663662  792861 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:20:58.787428  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:20:58.787522  792861 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:20:58.787537  792861 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:20:58.787551  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:20:58.787942  792861 buildroot.go:166] provisioning hostname "kubernetes-upgrade-625039"
	I1007 13:20:58.788033  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:20:58.788265  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:58.791404  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.791752  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:58.791799  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.792118  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:58.792292  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.792429  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.792542  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:58.792693  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:58.792948  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:58.792975  792861 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-625039 && echo "kubernetes-upgrade-625039" | sudo tee /etc/hostname
	I1007 13:20:58.922640  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-625039
	
	I1007 13:20:58.922671  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:58.925770  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.926180  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:58.926206  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:58.926466  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:58.926635  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.926776  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:58.926944  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:58.927131  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:58.927349  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:58.927375  792861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-625039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-625039/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-625039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:20:59.053869  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:20:59.053907  792861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:20:59.053982  792861 buildroot.go:174] setting up certificates
	I1007 13:20:59.054001  792861 provision.go:84] configureAuth start
	I1007 13:20:59.054054  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:20:59.054493  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:20:59.057174  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.057597  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.057639  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.057878  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.060400  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.060758  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.060807  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.061007  792861 provision.go:143] copyHostCerts
	I1007 13:20:59.061084  792861 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:20:59.061109  792861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:20:59.061186  792861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:20:59.061325  792861 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:20:59.061336  792861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:20:59.061368  792861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:20:59.061444  792861 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:20:59.061455  792861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:20:59.061482  792861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:20:59.061551  792861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-625039 san=[127.0.0.1 192.168.72.158 kubernetes-upgrade-625039 localhost minikube]
	I1007 13:20:59.182344  792861 provision.go:177] copyRemoteCerts
	I1007 13:20:59.182414  792861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:20:59.182449  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.185070  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.185446  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.185476  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.185602  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.185817  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.186006  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.186155  792861 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:20:59.278647  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:20:59.305977  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1007 13:20:59.334041  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:20:59.362846  792861 provision.go:87] duration metric: took 308.826509ms to configureAuth
	I1007 13:20:59.362888  792861 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:20:59.363122  792861 config.go:182] Loaded profile config "kubernetes-upgrade-625039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:20:59.363242  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.367007  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.367437  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.367468  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.367765  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.368005  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.368192  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.368384  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.368570  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:59.368824  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:59.368848  792861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:20:59.628958  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:20:59.629029  792861 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:20:59.629043  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetURL
	I1007 13:20:59.630458  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | Using libvirt version 6000000
	I1007 13:20:59.633151  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.633458  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.633491  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.633639  792861 main.go:141] libmachine: Docker is up and running!
	I1007 13:20:59.633655  792861 main.go:141] libmachine: Reticulating splines...
	I1007 13:20:59.633662  792861 client.go:171] duration metric: took 23.994137375s to LocalClient.Create
	I1007 13:20:59.633689  792861 start.go:167] duration metric: took 23.994210903s to libmachine.API.Create "kubernetes-upgrade-625039"
	I1007 13:20:59.633699  792861 start.go:293] postStartSetup for "kubernetes-upgrade-625039" (driver="kvm2")
	I1007 13:20:59.633710  792861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:20:59.633729  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:59.633980  792861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:20:59.634006  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.636203  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.636728  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.636766  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.637027  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.637322  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.637535  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.637697  792861 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:20:59.730067  792861 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:20:59.735727  792861 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:20:59.735766  792861 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:20:59.735863  792861 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:20:59.735971  792861 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:20:59.736082  792861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:20:59.747264  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:20:59.775289  792861 start.go:296] duration metric: took 141.571586ms for postStartSetup
	I1007 13:20:59.775360  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetConfigRaw
	I1007 13:20:59.776109  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:20:59.779085  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.779519  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.779545  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.779901  792861 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/config.json ...
	I1007 13:20:59.780143  792861 start.go:128] duration metric: took 24.164792629s to createHost
	I1007 13:20:59.780172  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.782957  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.783440  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.783465  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.783681  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.783892  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.784044  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.784203  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.784391  792861 main.go:141] libmachine: Using SSH client type: native
	I1007 13:20:59.784621  792861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:20:59.784628  792861 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:20:59.903546  792861 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728307259.857740506
	
	I1007 13:20:59.903577  792861 fix.go:216] guest clock: 1728307259.857740506
	I1007 13:20:59.903588  792861 fix.go:229] Guest: 2024-10-07 13:20:59.857740506 +0000 UTC Remote: 2024-10-07 13:20:59.780159247 +0000 UTC m=+65.116998251 (delta=77.581259ms)
	I1007 13:20:59.903637  792861 fix.go:200] guest clock delta is within tolerance: 77.581259ms
	I1007 13:20:59.903645  792861 start.go:83] releasing machines lock for "kubernetes-upgrade-625039", held for 24.2884634s
	I1007 13:20:59.903679  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:59.903997  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:20:59.907184  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.907578  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.907608  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.907804  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:59.908423  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:59.908641  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:20:59.908760  792861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:20:59.908857  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.908881  792861 ssh_runner.go:195] Run: cat /version.json
	I1007 13:20:59.908912  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:20:59.912166  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.912246  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.912607  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.912642  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.912670  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:20:59.912685  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:20:59.912783  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.912937  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:20:59.913006  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.913155  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:20:59.913173  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.913344  792861 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:20:59.913356  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:20:59.913622  792861 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:20:59.999842  792861 ssh_runner.go:195] Run: systemctl --version
	I1007 13:21:00.031050  792861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:21:00.203479  792861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:21:00.210730  792861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:21:00.210802  792861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:21:00.229941  792861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:21:00.229978  792861 start.go:495] detecting cgroup driver to use...
	I1007 13:21:00.230084  792861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:21:00.250817  792861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:21:00.270868  792861 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:21:00.270935  792861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:21:00.287661  792861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:21:00.305464  792861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:21:00.463804  792861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:21:00.646303  792861 docker.go:233] disabling docker service ...
	I1007 13:21:00.646373  792861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:21:00.669165  792861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:21:00.687575  792861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:21:00.819869  792861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:21:00.954684  792861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:21:00.971262  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:21:00.992730  792861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 13:21:00.992804  792861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:21:01.005625  792861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:21:01.005700  792861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:21:01.019217  792861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:21:01.030845  792861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:21:01.042262  792861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:21:01.054131  792861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:21:01.065046  792861 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:21:01.065120  792861 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:21:01.079687  792861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:21:01.089909  792861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:21:01.208839  792861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:21:01.308128  792861 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:21:01.308204  792861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:21:01.312925  792861 start.go:563] Will wait 60s for crictl version
	I1007 13:21:01.312996  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:01.317029  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:21:01.358553  792861 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:21:01.358662  792861 ssh_runner.go:195] Run: crio --version
	I1007 13:21:01.388629  792861 ssh_runner.go:195] Run: crio --version
	I1007 13:21:01.422988  792861 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 13:21:01.424597  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:21:01.427373  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:21:01.427679  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:20:50 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:21:01.427713  792861 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:21:01.427999  792861 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 13:21:01.432405  792861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:21:01.445979  792861 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:21:01.446142  792861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:21:01.446201  792861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:21:01.480050  792861 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:21:01.480135  792861 ssh_runner.go:195] Run: which lz4
	I1007 13:21:01.484436  792861 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:21:01.488946  792861 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:21:01.489003  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 13:21:03.333779  792861 crio.go:462] duration metric: took 1.849371985s to copy over tarball
	I1007 13:21:03.333891  792861 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:21:06.064009  792861 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.730073392s)
	I1007 13:21:06.064059  792861 crio.go:469] duration metric: took 2.730237084s to extract the tarball
	I1007 13:21:06.064071  792861 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:21:06.115757  792861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:21:06.167143  792861 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:21:06.167179  792861 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:21:06.167273  792861 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:21:06.167324  792861 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.167358  792861 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.167306  792861 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.167387  792861 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.167334  792861 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 13:21:06.167427  792861 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.167331  792861 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.168788  792861 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.168802  792861 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.168905  792861 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.169216  792861 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 13:21:06.169292  792861 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.169326  792861 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.169826  792861 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.169828  792861 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:21:06.346021  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.353784  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.358611  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.359835  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.362179  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.400720  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.419551  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 13:21:06.461046  792861 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 13:21:06.461089  792861 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.461142  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.478719  792861 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 13:21:06.478830  792861 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.478899  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.520870  792861 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 13:21:06.520920  792861 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.520996  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.532902  792861 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 13:21:06.532954  792861 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.533041  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.543041  792861 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 13:21:06.543095  792861 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.543151  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.571791  792861 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 13:21:06.571852  792861 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.571909  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.576223  792861 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 13:21:06.576275  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.576287  792861 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 13:21:06.576319  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.576353  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.576375  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.576327  792861 ssh_runner.go:195] Run: which crictl
	I1007 13:21:06.576448  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.580206  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.729939  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:21:06.730006  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.730073  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.733787  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.773930  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.774147  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.776970  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:06.889118  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:21:06.900019  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:21:06.900078  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:21:06.919817  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:21:06.927763  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:21:06.927763  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:21:06.945729  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:21:07.101019  792861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:21:07.121472  792861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:21:07.123588  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 13:21:07.123651  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 13:21:07.123680  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 13:21:07.123784  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 13:21:07.123838  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 13:21:07.134473  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 13:21:07.294253  792861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 13:21:07.294344  792861 cache_images.go:92] duration metric: took 1.127148891s to LoadCachedImages
	W1007 13:21:07.294425  792861 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
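Note: the warning above simply means the image tarballs expected under .minikube/cache/images were not present on the build host, so minikube falls back to pulling during kubeadm init. As a rough illustration only (not minikube's own check; paths copied from the "Loading image from:" lines above), a stdlib Go sketch to see which cached image files exist would be:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Base path and file names copied from the log lines above.
        base := "/home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/"
        files := []string{
            "kube-apiserver_v1.20.0",
            "kube-controller-manager_v1.20.0",
            "kube-scheduler_v1.20.0",
            "kube-proxy_v1.20.0",
            "coredns_1.7.0",
            "etcd_3.4.13-0",
            "pause_3.2",
        }
        for _, f := range files {
            if _, err := os.Stat(base + f); err != nil {
                fmt.Printf("missing: %s (%v)\n", f, err)
            } else {
                fmt.Printf("present: %s\n", f)
            }
        }
    }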
	I1007 13:21:07.294442  792861 kubeadm.go:934] updating node { 192.168.72.158 8443 v1.20.0 crio true true} ...
	I1007 13:21:07.294576  792861 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-625039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
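The [Service] drop-in above is rendered by minikube from the node's hostname and IP. Purely as an illustration (this is not minikube's actual kubeadm.go template, and the struct and field names here are invented), rendering such an ExecStart line with text/template looks like:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts holds the values substituted into the drop-in; names are illustrative.
    type kubeletOpts struct {
        BinDir, Hostname, NodeIP string
    }

    const execStartTmpl = `ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("execstart").Parse(execStartTmpl))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, kubeletOpts{
            BinDir:   "/var/lib/minikube/binaries/v1.20.0",
            Hostname: "kubernetes-upgrade-625039",
            NodeIP:   "192.168.72.158",
        })
    }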
	I1007 13:21:07.294656  792861 ssh_runner.go:195] Run: crio config
	I1007 13:21:07.349803  792861 cni.go:84] Creating CNI manager for ""
	I1007 13:21:07.349837  792861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:21:07.349852  792861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:21:07.349874  792861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-625039 NodeName:kubernetes-upgrade-625039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 13:21:07.350350  792861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-625039"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
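The generated kubeadm config above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---); on the node it is written to /var/tmp/minikube/kubeadm.yaml, as shown later in the log. A small stdlib-only sketch that splits such a file and reports the kind of each document (illustrative only, not part of minikube; the local file name is assumed) is:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Assumes the rendered config has been saved locally as kubeadm.yaml.
        data, err := os.ReadFile("kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Split on the YAML document separator and print each document's kind.
        for i, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
                }
            }
        }
    }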
	
	I1007 13:21:07.350546  792861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 13:21:07.364563  792861 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:21:07.364633  792861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:21:07.376661  792861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1007 13:21:07.395620  792861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:21:07.414387  792861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1007 13:21:07.434686  792861 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I1007 13:21:07.439129  792861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:21:07.452632  792861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:21:07.578358  792861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:21:07.596644  792861 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039 for IP: 192.168.72.158
	I1007 13:21:07.596674  792861 certs.go:194] generating shared ca certs ...
	I1007 13:21:07.596693  792861 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:07.596889  792861 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:21:07.596943  792861 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:21:07.596952  792861 certs.go:256] generating profile certs ...
	I1007 13:21:07.597032  792861 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.key
	I1007 13:21:07.597053  792861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.crt with IP's: []
	I1007 13:21:07.754755  792861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.crt ...
	I1007 13:21:07.754802  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.crt: {Name:mk0e43f29f8f4745536d2b10e937a87537d5d96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:07.810148  792861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.key ...
	I1007 13:21:07.810230  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.key: {Name:mkffde4f241be9da79919dfdfed5e571fd8a937a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:07.810438  792861 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key.89dfb328
	I1007 13:21:07.810467  792861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt.89dfb328 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.158]
	I1007 13:21:07.959060  792861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt.89dfb328 ...
	I1007 13:21:07.959097  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt.89dfb328: {Name:mk64b33e3fe63ffb206472a930f232d97e950a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:08.043280  792861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key.89dfb328 ...
	I1007 13:21:08.043331  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key.89dfb328: {Name:mk76e7aacc19c2532b3ce4f8c9ae439cf3045e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:08.043502  792861 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt.89dfb328 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt
	I1007 13:21:08.043642  792861 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key.89dfb328 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key
	I1007 13:21:08.043749  792861 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key
	I1007 13:21:08.043775  792861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.crt with IP's: []
	I1007 13:21:08.137859  792861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.crt ...
	I1007 13:21:08.137905  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.crt: {Name:mkbe88a7e4b686b1c0c2c11e05d1d8a19d01aafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:08.138147  792861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key ...
	I1007 13:21:08.138280  792861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key: {Name:mke76ea2eefb12722e96b73f3c556a7d9ac35db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:21:08.138627  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:21:08.138678  792861 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:21:08.138687  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:21:08.138733  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:21:08.138778  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:21:08.138808  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:21:08.138888  792861 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:21:08.139858  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:21:08.176463  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:21:08.211186  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:21:08.239526  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:21:08.272228  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 13:21:08.307278  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:21:08.337627  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:21:08.366864  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:21:08.398759  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:21:08.434212  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:21:08.476519  792861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:21:08.517541  792861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:21:08.554675  792861 ssh_runner.go:195] Run: openssl version
	I1007 13:21:08.563840  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:21:08.590527  792861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:21:08.597341  792861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:21:08.597474  792861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:21:08.607848  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:21:08.624553  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:21:08.641356  792861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:21:08.647910  792861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:21:08.648010  792861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:21:08.655892  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:21:08.670823  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:21:08.685204  792861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:21:08.690621  792861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:21:08.690685  792861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:21:08.697649  792861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
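As logged above, minikube shells out to openssl to hash each CA certificate and create the /etc/ssl/certs/<hash>.0 symlinks. For a quick local sanity check of one of those PEM files, a stdlib-only Go sketch (path copied from the log; this is not what the test itself runs) is:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log above.
        data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
    }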
	I1007 13:21:08.710583  792861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:21:08.715499  792861 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:21:08.715565  792861 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:21:08.715683  792861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:21:08.715777  792861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:21:08.769729  792861 cri.go:89] found id: ""
	I1007 13:21:08.769843  792861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:21:08.782588  792861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:21:08.794605  792861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:21:08.806589  792861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:21:08.806612  792861 kubeadm.go:157] found existing configuration files:
	
	I1007 13:21:08.806668  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:21:08.817846  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:21:08.817918  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:21:08.830276  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:21:08.843876  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:21:08.843944  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:21:08.855076  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:21:08.865569  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:21:08.865667  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:21:08.878175  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:21:08.890215  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:21:08.890293  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:21:08.903014  792861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:21:09.050137  792861 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:21:09.050342  792861 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:21:09.222363  792861 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:21:09.222507  792861 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:21:09.222631  792861 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:21:09.507509  792861 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:21:09.603713  792861 out.go:235]   - Generating certificates and keys ...
	I1007 13:21:09.603885  792861 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:21:09.603992  792861 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:21:09.731503  792861 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:21:09.974454  792861 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:21:10.060468  792861 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:21:10.215236  792861 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:21:10.426725  792861 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:21:10.426982  792861 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I1007 13:21:10.587395  792861 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:21:10.587584  792861 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I1007 13:21:10.740134  792861 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:21:11.158510  792861 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:21:12.042949  792861 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:21:12.043224  792861 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:21:12.370227  792861 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:21:12.613390  792861 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:21:12.853280  792861 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:21:13.008140  792861 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:21:13.026854  792861 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:21:13.026987  792861 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:21:13.027071  792861 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:21:13.186551  792861 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:21:13.188287  792861 out.go:235]   - Booting up control plane ...
	I1007 13:21:13.188437  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:21:13.199372  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:21:13.199544  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:21:13.204497  792861 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:21:13.211988  792861 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:21:53.175310  792861 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:21:53.176015  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:21:53.176331  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:21:58.175194  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:21:58.175446  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:22:08.174152  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:22:08.174449  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:22:28.173858  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:22:28.174161  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:23:08.172919  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:23:08.173178  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:23:08.173207  792861 kubeadm.go:310] 
	I1007 13:23:08.173289  792861 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:23:08.173369  792861 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:23:08.173379  792861 kubeadm.go:310] 
	I1007 13:23:08.173445  792861 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:23:08.173503  792861 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:23:08.173651  792861 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:23:08.173662  792861 kubeadm.go:310] 
	I1007 13:23:08.173830  792861 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:23:08.173880  792861 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:23:08.173935  792861 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:23:08.173946  792861 kubeadm.go:310] 
	I1007 13:23:08.174112  792861 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:23:08.174217  792861 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:23:08.174230  792861 kubeadm.go:310] 
	I1007 13:23:08.174338  792861 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:23:08.174451  792861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:23:08.174540  792861 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:23:08.174602  792861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:23:08.174611  792861 kubeadm.go:310] 
	I1007 13:23:08.174895  792861 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:23:08.174999  792861 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:23:08.175085  792861 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
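The repeated kubelet-check failures above come from kubeadm polling the kubelet's local healthz endpoint and getting connection refused, i.e. the kubelet never came up, so the control plane could not be booted. A minimal standalone probe of that same endpoint (URL copied from the log; illustrative only, not part of the test) is:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // Same endpoint kubeadm's kubelet-check curls, per the log above.
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz status:", resp.Status)
    }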
	W1007 13:23:08.175277  792861 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-625039 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:23:08.175321  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:23:09.345816  792861 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.170453297s)
	I1007 13:23:09.345899  792861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:23:09.360820  792861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:23:09.371117  792861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:23:09.371149  792861 kubeadm.go:157] found existing configuration files:
	
	I1007 13:23:09.371214  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:23:09.380989  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:23:09.381055  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:23:09.391493  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:23:09.401138  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:23:09.401201  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:23:09.411389  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:23:09.421309  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:23:09.421380  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:23:09.431536  792861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:23:09.442177  792861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:23:09.442246  792861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:23:09.452625  792861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:23:09.688779  792861 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:25:06.343727  792861 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:25:06.343883  792861 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:25:06.346681  792861 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:25:06.346782  792861 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:25:06.346879  792861 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:25:06.347029  792861 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:25:06.347194  792861 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:25:06.347287  792861 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:25:06.349113  792861 out.go:235]   - Generating certificates and keys ...
	I1007 13:25:06.349232  792861 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:25:06.349341  792861 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:25:06.349485  792861 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:25:06.349603  792861 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:25:06.349724  792861 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:25:06.349839  792861 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:25:06.349934  792861 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:25:06.350014  792861 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:25:06.350140  792861 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:25:06.350237  792861 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:25:06.350290  792861 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:25:06.350366  792861 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:25:06.350431  792861 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:25:06.350499  792861 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:25:06.350580  792861 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:25:06.350656  792861 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:25:06.350788  792861 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:25:06.350898  792861 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:25:06.350952  792861 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:25:06.351041  792861 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:25:06.352707  792861 out.go:235]   - Booting up control plane ...
	I1007 13:25:06.352834  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:25:06.352957  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:25:06.353071  792861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:25:06.353233  792861 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:25:06.353498  792861 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:25:06.353602  792861 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:25:06.353724  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:25:06.353985  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:25:06.354118  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:25:06.354366  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:25:06.354470  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:25:06.354744  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:25:06.354812  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:25:06.355089  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:25:06.355196  792861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:25:06.355509  792861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:25:06.355525  792861 kubeadm.go:310] 
	I1007 13:25:06.355577  792861 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:25:06.355627  792861 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:25:06.355643  792861 kubeadm.go:310] 
	I1007 13:25:06.355690  792861 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:25:06.355742  792861 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:25:06.355913  792861 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:25:06.355924  792861 kubeadm.go:310] 
	I1007 13:25:06.356077  792861 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:25:06.356143  792861 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:25:06.356200  792861 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:25:06.356211  792861 kubeadm.go:310] 
	I1007 13:25:06.356368  792861 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:25:06.356499  792861 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:25:06.356514  792861 kubeadm.go:310] 
	I1007 13:25:06.356680  792861 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:25:06.356821  792861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:25:06.356920  792861 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:25:06.357015  792861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:25:06.357112  792861 kubeadm.go:394] duration metric: took 3m57.641550821s to StartCluster
	I1007 13:25:06.357164  792861 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:25:06.357236  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:25:06.357339  792861 kubeadm.go:310] 
	I1007 13:25:06.416817  792861 cri.go:89] found id: ""
	I1007 13:25:06.416869  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.416882  792861 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:25:06.416891  792861 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:25:06.416969  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:25:06.464662  792861 cri.go:89] found id: ""
	I1007 13:25:06.464700  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.464712  792861 logs.go:284] No container was found matching "etcd"
	I1007 13:25:06.464721  792861 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:25:06.464792  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:25:06.514690  792861 cri.go:89] found id: ""
	I1007 13:25:06.514725  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.514736  792861 logs.go:284] No container was found matching "coredns"
	I1007 13:25:06.514746  792861 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:25:06.514824  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:25:06.574203  792861 cri.go:89] found id: ""
	I1007 13:25:06.574237  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.574248  792861 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:25:06.574256  792861 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:25:06.574328  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:25:06.621607  792861 cri.go:89] found id: ""
	I1007 13:25:06.621640  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.621652  792861 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:25:06.621660  792861 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:25:06.621738  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:25:06.676336  792861 cri.go:89] found id: ""
	I1007 13:25:06.676367  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.676378  792861 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:25:06.676386  792861 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:25:06.676464  792861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:25:06.723159  792861 cri.go:89] found id: ""
	I1007 13:25:06.723195  792861 logs.go:282] 0 containers: []
	W1007 13:25:06.723220  792861 logs.go:284] No container was found matching "kindnet"
	I1007 13:25:06.723236  792861 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:25:06.723255  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:25:06.873131  792861 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:25:06.873162  792861 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:25:06.873178  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:25:07.002287  792861 logs.go:123] Gathering logs for container status ...
	I1007 13:25:07.002334  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:25:07.053135  792861 logs.go:123] Gathering logs for kubelet ...
	I1007 13:25:07.053174  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:25:07.117410  792861 logs.go:123] Gathering logs for dmesg ...
	I1007 13:25:07.117473  792861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1007 13:25:07.136733  792861 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:25:07.136885  792861 out.go:270] * 
	* 
	W1007 13:25:07.136988  792861 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:25:07.137034  792861 out.go:270] * 
	* 
	W1007 13:25:07.138312  792861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:25:07.142510  792861 out.go:201] 
	W1007 13:25:07.144061  792861 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:25:07.144171  792861 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:25:07.144228  792861 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:25:07.146107  792861 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
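The exit status 109 corresponds to the K8S_KUBELET_NOT_RUNNING failure shown above: the kubelet on the v1.20.0 node never answered http://localhost:10248/healthz, so kubeadm timed out waiting for the control plane. A minimal manual follow-up, combining the kubelet checks from the kubeadm output above with the cgroup-driver retry that minikube itself suggests (profile name, binary path, flags and CRI socket are taken from this log; running these by hand is illustrative only, not part of the test):

    # inspect the kubelet and any control-plane containers inside the node
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-625039 sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-625039 sudo journalctl -xeu kubelet
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-625039 sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    # retry the start with the systemd cgroup driver, as suggested at the end of the failure output
    out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd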
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-625039
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-625039: (2.349945638s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-625039 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-625039 status --format={{.Host}}: exit status 7 (76.618845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1007 13:25:13.698125  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.291791484s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-625039 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.820938ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-625039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-625039
	    minikube start -p kubernetes-upgrade-625039 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6250392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-625039 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
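The downgrade refusal is a guard in minikube itself: it exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the existing v1.31.1 cluster, and the test then continues with option 3 from the suggestion, restarting at the current version. For reference, option 1, recreating the profile at the older version, would look like this with the driver and runtime used in this run (actually running it against the CI profile is hypothetical):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-625039
    out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio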
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-625039 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.80856732s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-07 13:26:29.904394966 +0000 UTC m=+4723.102934964
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-625039 -n kubernetes-upgrade-625039
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-625039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-625039 logs -n 25: (1.886321086s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:22 UTC | 07 Oct 24 13:23 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-499494 sudo           | NoKubernetes-499494       | jenkins | v1.34.0 | 07 Oct 24 13:22 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-499494                | NoKubernetes-499494       | jenkins | v1.34.0 | 07 Oct 24 13:22 UTC | 07 Oct 24 13:22 UTC |
	| start   | -p cert-expiration-004876             | cert-expiration-004876    | jenkins | v1.34.0 | 07 Oct 24 13:22 UTC | 07 Oct 24 13:24 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-193275 stop           | minikube                  | jenkins | v1.26.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	| start   | -p stopped-upgrade-193275             | stopped-upgrade-193275    | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:24 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-011126                       | pause-011126              | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:23 UTC |
	| start   | -p force-systemd-flag-028990          | force-systemd-flag-028990 | jenkins | v1.34.0 | 07 Oct 24 13:23 UTC | 07 Oct 24 13:24 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-193275             | stopped-upgrade-193275    | jenkins | v1.34.0 | 07 Oct 24 13:24 UTC | 07 Oct 24 13:24 UTC |
	| start   | -p cert-options-079658                | cert-options-079658       | jenkins | v1.34.0 | 07 Oct 24 13:24 UTC | 07 Oct 24 13:25 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-028990 ssh cat     | force-systemd-flag-028990 | jenkins | v1.34.0 | 07 Oct 24 13:24 UTC | 07 Oct 24 13:24 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-028990          | force-systemd-flag-028990 | jenkins | v1.34.0 | 07 Oct 24 13:24 UTC | 07 Oct 24 13:24 UTC |
	| start   | -p old-k8s-version-120978             | old-k8s-version-120978    | jenkins | v1.34.0 | 07 Oct 24 13:24 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-625039          | kubernetes-upgrade-625039 | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:25 UTC |
	| start   | -p kubernetes-upgrade-625039          | kubernetes-upgrade-625039 | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:25 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-079658 ssh               | cert-options-079658       | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:25 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-079658 -- sudo        | cert-options-079658       | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:25 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-079658                | cert-options-079658       | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:25 UTC |
	| start   | -p no-preload-016701                  | no-preload-016701         | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039          | kubernetes-upgrade-625039 | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039          | kubernetes-upgrade-625039 | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:25:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:25:56.144234  798060 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:25:56.144493  798060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:56.144501  798060 out.go:358] Setting ErrFile to fd 2...
	I1007 13:25:56.144506  798060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:56.144675  798060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:25:56.145309  798060 out.go:352] Setting JSON to false
	I1007 13:25:56.146315  798060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11305,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:25:56.146379  798060 start.go:139] virtualization: kvm guest
	I1007 13:25:56.148531  798060 out.go:177] * [kubernetes-upgrade-625039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:25:56.150160  798060 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:25:56.150224  798060 notify.go:220] Checking for updates...
	I1007 13:25:56.153079  798060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:25:56.154199  798060 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:25:56.155201  798060 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:25:56.156428  798060 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:25:56.157865  798060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:25:56.159981  798060 config.go:182] Loaded profile config "kubernetes-upgrade-625039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:25:56.160648  798060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:25:56.160758  798060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:25:56.176336  798060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
	I1007 13:25:56.176893  798060 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:25:56.177447  798060 main.go:141] libmachine: Using API Version  1
	I1007 13:25:56.177470  798060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:25:56.177821  798060 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:25:56.178078  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:25:56.178362  798060 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:25:56.178690  798060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:25:56.178735  798060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:25:56.194928  798060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I1007 13:25:56.195504  798060 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:25:56.196063  798060 main.go:141] libmachine: Using API Version  1
	I1007 13:25:56.196094  798060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:25:56.196446  798060 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:25:56.196649  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:25:56.236099  798060 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:25:56.237551  798060 start.go:297] selected driver: kvm2
	I1007 13:25:56.237568  798060 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:25:56.237677  798060 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:25:56.238467  798060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:25:56.238557  798060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:25:56.254444  798060 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:25:56.254901  798060 cni.go:84] Creating CNI manager for ""
	I1007 13:25:56.254958  798060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:25:56.254988  798060 start.go:340] cluster config:
	{Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-625039 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:25:56.255105  798060 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:25:56.257277  798060 out.go:177] * Starting "kubernetes-upgrade-625039" primary control-plane node in "kubernetes-upgrade-625039" cluster
	I1007 13:25:58.323356  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:25:58.323872  797609 main.go:141] libmachine: (no-preload-016701) DBG | unable to find current IP address of domain no-preload-016701 in network mk-no-preload-016701
	I1007 13:25:58.323900  797609 main.go:141] libmachine: (no-preload-016701) DBG | I1007 13:25:58.323840  797820 retry.go:31] will retry after 4.745796673s: waiting for machine to come up
	I1007 13:25:56.258954  798060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:25:56.259012  798060 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:25:56.259027  798060 cache.go:56] Caching tarball of preloaded images
	I1007 13:25:56.259152  798060 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:25:56.259164  798060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:25:56.259281  798060 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/config.json ...
	I1007 13:25:56.259500  798060 start.go:360] acquireMachinesLock for kubernetes-upgrade-625039: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:26:04.552738  798060 start.go:364] duration metric: took 8.293186812s to acquireMachinesLock for "kubernetes-upgrade-625039"
	I1007 13:26:04.552816  798060 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:26:04.552828  798060 fix.go:54] fixHost starting: 
	I1007 13:26:04.553418  798060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:26:04.553476  798060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:26:04.572664  798060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
	I1007 13:26:04.573180  798060 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:26:04.573636  798060 main.go:141] libmachine: Using API Version  1
	I1007 13:26:04.573659  798060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:26:04.573991  798060 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:26:04.574252  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:04.574424  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetState
	I1007 13:26:04.576294  798060 fix.go:112] recreateIfNeeded on kubernetes-upgrade-625039: state=Running err=<nil>
	W1007 13:26:04.576314  798060 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:26:04.578578  798060 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-625039" VM ...
	I1007 13:26:03.071025  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.071678  797609 main.go:141] libmachine: (no-preload-016701) Found IP for machine: 192.168.39.197
	I1007 13:26:03.071701  797609 main.go:141] libmachine: (no-preload-016701) Reserving static IP address...
	I1007 13:26:03.071711  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has current primary IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.072141  797609 main.go:141] libmachine: (no-preload-016701) DBG | unable to find host DHCP lease matching {name: "no-preload-016701", mac: "52:54:00:d2:1e:55", ip: "192.168.39.197"} in network mk-no-preload-016701
	I1007 13:26:03.158541  797609 main.go:141] libmachine: (no-preload-016701) DBG | Getting to WaitForSSH function...
	I1007 13:26:03.158574  797609 main.go:141] libmachine: (no-preload-016701) Reserved static IP address: 192.168.39.197
	I1007 13:26:03.158587  797609 main.go:141] libmachine: (no-preload-016701) Waiting for SSH to be available...
	I1007 13:26:03.161328  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.161874  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.161908  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.162130  797609 main.go:141] libmachine: (no-preload-016701) DBG | Using SSH client type: external
	I1007 13:26:03.162162  797609 main.go:141] libmachine: (no-preload-016701) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa (-rw-------)
	I1007 13:26:03.162195  797609 main.go:141] libmachine: (no-preload-016701) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:26:03.162210  797609 main.go:141] libmachine: (no-preload-016701) DBG | About to run SSH command:
	I1007 13:26:03.162258  797609 main.go:141] libmachine: (no-preload-016701) DBG | exit 0
	I1007 13:26:03.290531  797609 main.go:141] libmachine: (no-preload-016701) DBG | SSH cmd err, output: <nil>: 
	I1007 13:26:03.290801  797609 main.go:141] libmachine: (no-preload-016701) KVM machine creation complete!
	I1007 13:26:03.291220  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetConfigRaw
	I1007 13:26:03.291807  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:03.292024  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:03.292210  797609 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:26:03.292231  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:26:03.293616  797609 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:26:03.293633  797609 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:26:03.293640  797609 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:26:03.293649  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.296353  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.296765  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.296804  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.296997  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:03.297191  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.297312  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.297454  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:03.297653  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:03.297918  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:03.297933  797609 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:26:03.401621  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:26:03.401653  797609 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:26:03.401668  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.404702  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.405052  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.405089  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.405274  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:03.405468  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.405611  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.405727  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:03.405899  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:03.406152  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:03.406168  797609 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:26:03.511516  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:26:03.511606  797609 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:26:03.511627  797609 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:26:03.511642  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetMachineName
	I1007 13:26:03.511953  797609 buildroot.go:166] provisioning hostname "no-preload-016701"
	I1007 13:26:03.511981  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetMachineName
	I1007 13:26:03.512176  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.515529  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.516081  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.516119  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.516364  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:03.516663  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.516875  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.517028  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:03.517318  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:03.517517  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:03.517530  797609 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-016701 && echo "no-preload-016701" | sudo tee /etc/hostname
	I1007 13:26:03.638617  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-016701
	
	I1007 13:26:03.638653  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.641400  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.641789  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.641832  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.642003  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:03.642220  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.642394  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.642524  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:03.642675  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:03.642868  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:03.642884  797609 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-016701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-016701/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-016701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:26:03.755486  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:26:03.755554  797609 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:26:03.755609  797609 buildroot.go:174] setting up certificates
	I1007 13:26:03.755631  797609 provision.go:84] configureAuth start
	I1007 13:26:03.755652  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetMachineName
	I1007 13:26:03.755946  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetIP
	I1007 13:26:03.758657  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.759022  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.759043  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.759275  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.761781  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.762130  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.762173  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.762351  797609 provision.go:143] copyHostCerts
	I1007 13:26:03.762424  797609 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:26:03.762438  797609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:26:03.762509  797609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:26:03.762638  797609 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:26:03.762649  797609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:26:03.762680  797609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:26:03.762772  797609 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:26:03.762790  797609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:26:03.762827  797609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:26:03.762910  797609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.no-preload-016701 san=[127.0.0.1 192.168.39.197 localhost minikube no-preload-016701]
	I1007 13:26:03.901358  797609 provision.go:177] copyRemoteCerts
	I1007 13:26:03.901418  797609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:26:03.901452  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:03.904584  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.904938  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:03.904972  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:03.905135  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:03.905333  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:03.905484  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:03.905692  797609 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:26:03.989565  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:26:04.016527  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 13:26:04.043768  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:26:04.070928  797609 provision.go:87] duration metric: took 315.277171ms to configureAuth
	I1007 13:26:04.070960  797609 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:26:04.071197  797609 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:26:04.071303  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:04.074257  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.074500  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.074526  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.074687  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:04.074908  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.075068  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.075198  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:04.075404  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:04.075610  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:04.075641  797609 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:26:04.306261  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:26:04.306304  797609 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:26:04.306313  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetURL
	I1007 13:26:04.307755  797609 main.go:141] libmachine: (no-preload-016701) DBG | Using libvirt version 6000000
	I1007 13:26:04.309804  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.310187  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.310214  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.310415  797609 main.go:141] libmachine: Docker is up and running!
	I1007 13:26:04.310433  797609 main.go:141] libmachine: Reticulating splines...
	I1007 13:26:04.310441  797609 client.go:171] duration metric: took 24.232276463s to LocalClient.Create
	I1007 13:26:04.310472  797609 start.go:167] duration metric: took 24.232346949s to libmachine.API.Create "no-preload-016701"
	I1007 13:26:04.310485  797609 start.go:293] postStartSetup for "no-preload-016701" (driver="kvm2")
	I1007 13:26:04.310500  797609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:26:04.310544  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:04.310810  797609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:26:04.310859  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:04.313100  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.313451  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.313488  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.313616  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:04.313786  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.313935  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:04.314108  797609 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:26:04.399901  797609 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:26:04.405145  797609 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:26:04.405183  797609 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:26:04.405254  797609 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:26:04.405346  797609 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:26:04.405489  797609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:26:04.417023  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:26:04.442562  797609 start.go:296] duration metric: took 132.06046ms for postStartSetup
	I1007 13:26:04.442623  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetConfigRaw
	I1007 13:26:04.443284  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetIP
	I1007 13:26:04.445716  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.445992  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.446018  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.446351  797609 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/config.json ...
	I1007 13:26:04.446532  797609 start.go:128] duration metric: took 24.390627403s to createHost
	I1007 13:26:04.446556  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:04.448842  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.449104  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.449127  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.449292  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:04.449471  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.449668  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.449804  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:04.449958  797609 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:04.450203  797609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I1007 13:26:04.450226  797609 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:26:04.552568  797609 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728307564.527367304
	
	I1007 13:26:04.552596  797609 fix.go:216] guest clock: 1728307564.527367304
	I1007 13:26:04.552604  797609 fix.go:229] Guest: 2024-10-07 13:26:04.527367304 +0000 UTC Remote: 2024-10-07 13:26:04.446544661 +0000 UTC m=+48.902123677 (delta=80.822643ms)
	I1007 13:26:04.552631  797609 fix.go:200] guest clock delta is within tolerance: 80.822643ms
	I1007 13:26:04.552636  797609 start.go:83] releasing machines lock for "no-preload-016701", held for 24.496934746s
	I1007 13:26:04.552661  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:04.552982  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetIP
	I1007 13:26:04.556275  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.556734  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.556769  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.557044  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:04.557617  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:04.557800  797609 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:26:04.557916  797609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:26:04.557985  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:04.558059  797609 ssh_runner.go:195] Run: cat /version.json
	I1007 13:26:04.558087  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:26:04.561394  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.561665  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.561836  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.561867  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.562107  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:04.562158  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:04.562183  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:04.562367  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:26:04.562421  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.562610  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:04.562631  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:26:04.562830  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:26:04.562854  797609 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:26:04.562988  797609 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:26:04.665431  797609 ssh_runner.go:195] Run: systemctl --version
	I1007 13:26:04.672137  797609 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:26:04.848993  797609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:26:04.855998  797609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:26:04.856067  797609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:26:04.876503  797609 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:26:04.876532  797609 start.go:495] detecting cgroup driver to use...
	I1007 13:26:04.876602  797609 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:26:04.896057  797609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:26:04.915021  797609 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:26:04.915111  797609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:26:04.931098  797609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:26:04.948635  797609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:26:05.089967  797609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:26:05.238324  797609 docker.go:233] disabling docker service ...
	I1007 13:26:05.238392  797609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:26:05.256678  797609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:26:05.273572  797609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:26:05.439667  797609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:26:05.576483  797609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:26:05.592948  797609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:26:05.614190  797609 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:26:05.614277  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.626218  797609 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:26:05.626322  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.638231  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.651129  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.663663  797609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:26:05.676536  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.689536  797609 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.708886  797609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:05.721060  797609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:26:05.731477  797609 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:26:05.731540  797609 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:26:05.745417  797609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:26:05.756504  797609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:26:05.884447  797609 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:26:05.982838  797609 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:26:05.982921  797609 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:26:05.987739  797609 start.go:563] Will wait 60s for crictl version
	I1007 13:26:05.987815  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:05.993511  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:26:06.037001  797609 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:26:06.037095  797609 ssh_runner.go:195] Run: crio --version
	I1007 13:26:06.067792  797609 ssh_runner.go:195] Run: crio --version
	I1007 13:26:06.105295  797609 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:26:04.580391  798060 machine.go:93] provisionDockerMachine start ...
	I1007 13:26:04.580424  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:04.580746  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:04.583460  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.583883  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:04.583927  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.584091  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:04.584308  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.584476  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.584644  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:04.584855  798060 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:04.585075  798060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:26:04.585088  798060 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:26:04.703322  798060 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-625039
	
	I1007 13:26:04.703365  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:26:04.703774  798060 buildroot.go:166] provisioning hostname "kubernetes-upgrade-625039"
	I1007 13:26:04.703808  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:26:04.704039  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:04.706817  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.707142  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:04.707174  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.707352  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:04.707570  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.707782  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.708487  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:04.708686  798060 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:04.708919  798060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:26:04.708937  798060 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-625039 && echo "kubernetes-upgrade-625039" | sudo tee /etc/hostname
	I1007 13:26:04.851448  798060 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-625039
	
	I1007 13:26:04.851485  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:04.854873  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.855338  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:04.855367  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.855573  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:04.855805  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.856002  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:04.856190  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:04.856392  798060 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:04.856585  798060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:26:04.856608  798060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-625039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-625039/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-625039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:26:04.980094  798060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
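The inline script above makes the new hostname resolvable locally: it rewrites an existing 127.0.1.1 entry if one is present and appends one otherwise. The state it aims for can be checked with (sketch):

    grep '^127.0.1.1' /etc/hosts
    # 127.0.1.1 kubernetes-upgrade-625039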
	I1007 13:26:04.980137  798060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:26:04.980189  798060 buildroot.go:174] setting up certificates
	I1007 13:26:04.980204  798060 provision.go:84] configureAuth start
	I1007 13:26:04.980224  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetMachineName
	I1007 13:26:04.980724  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:26:04.984614  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.985129  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:04.985169  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.985353  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:04.988099  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.988530  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:04.988566  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:04.988780  798060 provision.go:143] copyHostCerts
	I1007 13:26:04.988845  798060 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:26:04.988871  798060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:26:04.988938  798060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:26:04.989079  798060 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:26:04.989090  798060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:26:04.989123  798060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:26:04.989209  798060 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:26:04.989218  798060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:26:04.989247  798060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:26:04.989320  798060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-625039 san=[127.0.0.1 192.168.72.158 kubernetes-upgrade-625039 localhost minikube]
	I1007 13:26:05.127769  798060 provision.go:177] copyRemoteCerts
	I1007 13:26:05.127842  798060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:26:05.127877  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:05.131570  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:05.131974  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:05.132004  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:05.132203  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:05.132384  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:05.132572  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:05.132717  798060 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:26:05.225773  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:26:05.254928  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1007 13:26:05.288483  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:26:05.316710  798060 provision.go:87] duration metric: took 336.485162ms to configureAuth
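configureAuth regenerates the machine's server certificate with the SAN list shown above (loopback, the VM's DHCP address, the hostname, localhost and minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to double-check the SANs that ended up in the certificate (a sketch, reusing the host-side path from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # expect 127.0.0.1, 192.168.72.158, kubernetes-upgrade-625039, localhost, minikube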
	I1007 13:26:05.316742  798060 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:26:05.316906  798060 config.go:182] Loaded profile config "kubernetes-upgrade-625039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:26:05.316997  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:05.319773  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:05.320231  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:05.320264  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:05.320503  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:05.320724  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:05.320892  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:05.321048  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:05.321202  798060 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:05.321423  798060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:26:05.321443  798060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:26:06.107160  797609 main.go:141] libmachine: (no-preload-016701) Calling .GetIP
	I1007 13:26:06.111522  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:06.111940  797609 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:26:06.112002  797609 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:26:06.112429  797609 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:26:06.117250  797609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:26:06.131890  797609 kubeadm.go:883] updating cluster {Name:no-preload-016701 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:no-preload-016701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:26:06.132051  797609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:26:06.132103  797609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:26:06.168629  797609 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
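Because this profile (no-preload-016701) deliberately runs without the preloaded image tarball, crictl reports none of the expected control-plane images, and minikube falls back to loading each image individually from its on-disk cache in the steps that follow. The same check can be reproduced by hand (sketch):

    sudo crictl images | grep kube-apiserver
    # empty here, so each image tarball is transferred from the host-side
    # .minikube/cache/images/amd64/... directory and loaded with podman, as logged below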
	I1007 13:26:06.168659  797609 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:26:06.168747  797609 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:06.168760  797609 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.168800  797609 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.168832  797609 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1007 13:26:06.168872  797609 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.168906  797609 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.168912  797609 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.168942  797609 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.170467  797609 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.170493  797609 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.170512  797609 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.170493  797609 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1007 13:26:06.170520  797609 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:06.170514  797609 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.170577  797609 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.170553  797609 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.326764  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.332810  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.333806  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.338511  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.347224  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1007 13:26:06.351488  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.352093  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.402362  797609 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1007 13:26:06.402418  797609 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.402474  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.524331  797609 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1007 13:26:06.524385  797609 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.524445  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.524454  797609 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1007 13:26:06.524492  797609 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.524532  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.541856  797609 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1007 13:26:06.541897  797609 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.541931  797609 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I1007 13:26:06.541944  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.541966  797609 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I1007 13:26:06.541991  797609 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1007 13:26:06.542003  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.542043  797609 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.542059  797609 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1007 13:26:06.542079  797609 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.542082  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.542106  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.542108  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:06.542136  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.542191  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.560976  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.629766  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.629809  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.629898  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.629903  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1007 13:26:06.629910  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.630010  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.679406  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.763672  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 13:26:06.800780  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1007 13:26:06.803252  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.803403  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 13:26:06.803437  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.803514  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 13:26:06.843944  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 13:26:06.895518  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1007 13:26:06.895654  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 13:26:06.932586  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I1007 13:26:06.994302  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 13:26:06.994333  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1007 13:26:06.994428  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 13:26:06.994441  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 13:26:06.994452  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1007 13:26:06.994529  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1007 13:26:07.000434  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1007 13:26:07.000489  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I1007 13:26:07.000521  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I1007 13:26:07.000538  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1007 13:26:07.028580  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1007 13:26:07.028703  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I1007 13:26:07.076376  797609 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:07.083067  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1007 13:26:07.083090  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1007 13:26:07.083138  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I1007 13:26:07.083166  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I1007 13:26:07.083193  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 13:26:07.083197  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I1007 13:26:07.083226  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I1007 13:26:07.083249  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I1007 13:26:07.083193  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 13:26:07.083227  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I1007 13:26:07.141317  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I1007 13:26:07.141371  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I1007 13:26:07.212913  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I1007 13:26:07.212969  797609 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1007 13:26:07.212973  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I1007 13:26:07.212982  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I1007 13:26:07.213009  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I1007 13:26:07.213029  797609 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:07.213101  797609 ssh_runner.go:195] Run: which crictl
	I1007 13:26:07.272617  797609 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I1007 13:26:07.272706  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I1007 13:26:07.282928  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:08.057702  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
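Each image goes through the same cycle visible above: stat the tarball under /var/lib/minikube/images (missing on a fresh VM), scp it over from the host cache, then load it with podman so CRI-O can use it. Reproducing one cycle by hand, with pause as the example since its tarball path appears in the log (sketch):

    sudo podman load -i /var/lib/minikube/images/pause_3.10
    sudo crictl images | grep 'registry.k8s.io/pause'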
	I1007 13:26:08.057768  797609 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1007 13:26:08.057847  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1007 13:26:08.057859  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:08.171587  797609 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:26:10.280233  797609 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.10860143s)
	I1007 13:26:10.280287  797609 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.222411191s)
	I1007 13:26:10.280307  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1007 13:26:10.280315  797609 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 13:26:10.280348  797609 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 13:26:10.280397  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 13:26:10.280412  797609 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 13:26:13.142566  797609 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.862079605s)
	I1007 13:26:13.142632  797609 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 13:26:13.142665  797609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1007 13:26:13.142572  797609 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.862148824s)
	I1007 13:26:13.142721  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1007 13:26:13.142758  797609 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 13:26:13.142808  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 13:26:15.232532  797609 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.089694955s)
	I1007 13:26:15.232563  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1007 13:26:15.232589  797609 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 13:26:15.232642  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 13:26:11.511410  798060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:26:11.511445  798060 machine.go:96] duration metric: took 6.931031101s to provisionDockerMachine
	I1007 13:26:11.511462  798060 start.go:293] postStartSetup for "kubernetes-upgrade-625039" (driver="kvm2")
	I1007 13:26:11.511476  798060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:26:11.511505  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:11.511887  798060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:26:11.511924  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:11.515601  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.516033  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:11.516072  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.516271  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:11.516483  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:11.516684  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:11.516836  798060 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:26:11.609463  798060 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:26:11.613991  798060 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:26:11.614048  798060 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:26:11.614122  798060 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:26:11.614195  798060 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:26:11.614295  798060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:26:11.624889  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:26:11.652174  798060 start.go:296] duration metric: took 140.694441ms for postStartSetup
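postStartSetup creates minikube's working directories and then syncs local assets: the single file found under .minikube/files/etc/ssl/certs is pushed to the same path on the guest. A quick check that the synced file landed (sketch):

    sudo ls -l /etc/ssl/certs/7543242.pem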
	I1007 13:26:11.652249  798060 fix.go:56] duration metric: took 7.099419499s for fixHost
	I1007 13:26:11.652278  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:11.655708  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.656086  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:11.656129  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.656303  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:11.656552  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:11.656741  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:11.656895  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:11.657042  798060 main.go:141] libmachine: Using SSH client type: native
	I1007 13:26:11.657253  798060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I1007 13:26:11.657268  798060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:26:11.775361  798060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728307571.770981499
	
	I1007 13:26:11.775402  798060 fix.go:216] guest clock: 1728307571.770981499
	I1007 13:26:11.775413  798060 fix.go:229] Guest: 2024-10-07 13:26:11.770981499 +0000 UTC Remote: 2024-10-07 13:26:11.652255161 +0000 UTC m=+15.551181670 (delta=118.726338ms)
	I1007 13:26:11.775441  798060 fix.go:200] guest clock delta is within tolerance: 118.726338ms
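The clock check reads the guest's time with date +%s.%N over SSH and compares it against the host-side timestamp recorded for the same call; the ~119 ms delta is inside minikube's tolerance, so no resync happens. A rough manual version of the comparison (a sketch; SSH round-trip time inflates the result, so treat small deltas as noise):

    host=$(date +%s.%N)
    guest=$(minikube -p kubernetes-upgrade-625039 ssh -- date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3f s\n", g - h }'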
	I1007 13:26:11.775449  798060 start.go:83] releasing machines lock for "kubernetes-upgrade-625039", held for 7.222658971s
	I1007 13:26:11.775490  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:11.775796  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:26:11.778810  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.779366  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:11.779412  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.779660  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:11.780340  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:11.780565  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .DriverName
	I1007 13:26:11.780663  798060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:26:11.780702  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:11.780863  798060 ssh_runner.go:195] Run: cat /version.json
	I1007 13:26:11.780897  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHHostname
	I1007 13:26:11.783472  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.783763  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.783952  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:11.783983  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.784129  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:11.784162  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:11.784174  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:11.784337  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHPort
	I1007 13:26:11.784438  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:11.784540  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHKeyPath
	I1007 13:26:11.784614  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:11.784669  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetSSHUsername
	I1007 13:26:11.784749  798060 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:26:11.784826  798060 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kubernetes-upgrade-625039/id_rsa Username:docker}
	I1007 13:26:11.892593  798060 ssh_runner.go:195] Run: systemctl --version
	I1007 13:26:11.900638  798060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:26:12.053531  798060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:26:12.062319  798060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:26:12.062393  798060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:26:12.073892  798060 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
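Before choosing a CNI, minikube moves any pre-existing bridge or podman CNI configs out of the way by renaming them to *.mk_disabled; none were present here, hence "nothing to disable". To see what, if anything, was set aside on a node (sketch):

    ls /etc/cni/net.d/
    # anything ending in .mk_disabled was renamed by the step above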
	I1007 13:26:12.073923  798060 start.go:495] detecting cgroup driver to use...
	I1007 13:26:12.073996  798060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:26:12.092402  798060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:26:12.108838  798060 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:26:12.108897  798060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:26:12.124460  798060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:26:12.139522  798060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:26:12.292097  798060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:26:12.438197  798060 docker.go:233] disabling docker service ...
	I1007 13:26:12.438318  798060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:26:12.463297  798060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:26:12.482998  798060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:26:12.657054  798060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:26:12.822723  798060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:26:12.837764  798060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:26:12.861918  798060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:26:12.862127  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.875643  798060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:26:12.875717  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.887387  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.898474  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.913281  798060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:26:12.929159  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.944652  798060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.957390  798060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:26:12.969101  798060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:26:12.979428  798060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:26:12.990353  798060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:26:13.156021  798060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:26:12.626227  797105 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:26:12.626908  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:26:12.627257  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
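These three lines come from a different minikube process interleaved in the same log (pid tag 797105) whose kubeadm run is waiting on the kubelet: nothing is answering on the kubelet's health port, so kubeadm's initial 40s check has already lapsed. The probe kubeadm repeats is essentially the curl from the message, and the usual manual follow-ups look like this (sketch, run on that node):

    curl -sSL http://localhost:10248/healthz || echo "kubelet not answering yet"
    systemctl status kubelet --no-pager | head -n 20
    sudo journalctl -u kubelet --no-pager | tail -n 40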
	I1007 13:26:18.680474  798060 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.524401516s)
	I1007 13:26:18.680510  798060 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:26:18.680555  798060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:26:18.687348  798060 start.go:563] Will wait 60s for crictl version
	I1007 13:26:18.687431  798060 ssh_runner.go:195] Run: which crictl
	I1007 13:26:18.693467  798060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:26:18.738424  798060 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:26:18.738549  798060 ssh_runner.go:195] Run: crio --version
	I1007 13:26:18.769420  798060 ssh_runner.go:195] Run: crio --version
	I1007 13:26:18.800394  798060 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:26:17.100550  797609 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.86787747s)
	I1007 13:26:17.100590  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1007 13:26:17.100619  797609 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 13:26:17.100675  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 13:26:19.473636  797609 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.37292361s)
	I1007 13:26:19.473675  797609 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1007 13:26:19.473708  797609 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1007 13:26:19.473772  797609 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1007 13:26:18.802216  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) Calling .GetIP
	I1007 13:26:18.805636  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:18.806166  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:29:3b", ip: ""} in network mk-kubernetes-upgrade-625039: {Iface:virbr4 ExpiryTime:2024-10-07 14:25:32 +0000 UTC Type:0 Mac:52:54:00:9c:29:3b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:kubernetes-upgrade-625039 Clientid:01:52:54:00:9c:29:3b}
	I1007 13:26:18.806205  798060 main.go:141] libmachine: (kubernetes-upgrade-625039) DBG | domain kubernetes-upgrade-625039 has defined IP address 192.168.72.158 and MAC address 52:54:00:9c:29:3b in network mk-kubernetes-upgrade-625039
	I1007 13:26:18.806501  798060 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 13:26:18.812049  798060 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:26:18.812212  798060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:26:18.812281  798060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:26:18.861978  798060 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:26:18.862014  798060 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:26:18.862087  798060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:26:18.901404  798060 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:26:18.901429  798060 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:26:18.901437  798060 kubeadm.go:934] updating node { 192.168.72.158 8443 v1.31.1 crio true true} ...
	I1007 13:26:18.901540  798060 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-625039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:26:18.901602  798060 ssh_runner.go:195] Run: crio config
	I1007 13:26:18.959139  798060 cni.go:84] Creating CNI manager for ""
	I1007 13:26:18.959172  798060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:26:18.959185  798060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:26:18.959214  798060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-625039 NodeName:kubernetes-upgrade-625039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:26:18.959413  798060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-625039"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:26:18.959497  798060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:26:18.974757  798060 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:26:18.974860  798060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:26:18.985980  798060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1007 13:26:19.009077  798060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:26:19.032048  798060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1007 13:26:19.056514  798060 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I1007 13:26:19.061287  798060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:26:19.227530  798060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:26:19.243817  798060 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039 for IP: 192.168.72.158
	I1007 13:26:19.243852  798060 certs.go:194] generating shared ca certs ...
	I1007 13:26:19.243875  798060 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:26:19.244051  798060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:26:19.244091  798060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:26:19.244101  798060 certs.go:256] generating profile certs ...
	I1007 13:26:19.244179  798060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/client.key
	I1007 13:26:19.244220  798060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key.89dfb328
	I1007 13:26:19.244269  798060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key
	I1007 13:26:19.244400  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:26:19.244431  798060 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:26:19.244440  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:26:19.244464  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:26:19.244486  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:26:19.244508  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:26:19.244547  798060 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:26:19.245211  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:26:19.272554  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:26:19.299759  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:26:19.328009  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:26:19.357650  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 13:26:19.387892  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:26:19.416291  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:26:19.444713  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kubernetes-upgrade-625039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:26:19.473696  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:26:19.501400  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:26:19.529197  798060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:26:19.560834  798060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:26:19.580446  798060 ssh_runner.go:195] Run: openssl version
	I1007 13:26:19.586599  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:26:19.599056  798060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:26:19.604173  798060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:26:19.604249  798060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:26:19.610624  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:26:19.621897  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:26:19.635648  798060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:26:19.641003  798060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:26:19.641079  798060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:26:19.647473  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:26:19.659346  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:26:19.672090  798060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:26:19.677142  798060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:26:19.677219  798060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:26:19.683527  798060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:26:19.694494  798060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:26:19.699377  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:26:19.705386  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:26:19.711546  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:26:19.718299  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:26:19.724983  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:26:19.733277  798060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:26:19.741561  798060 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-625039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.31.1 ClusterName:kubernetes-upgrade-625039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:26:19.741673  798060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:26:19.741772  798060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:26:19.791809  798060 cri.go:89] found id: "10ab7783f1f25adae5c5e8e74087fb6a2052a0b2e98070f154ba0ec12a12f40d"
	I1007 13:26:19.791839  798060 cri.go:89] found id: "f0953026cc1f1af859f5036b8c16d3bb1a1be2b81eb14292273c77f801a35603"
	I1007 13:26:19.791845  798060 cri.go:89] found id: "1cda1ff0a84cef9b08569ab3752d0b3b57c4b769bd16b1d97fc5173d3af336ab"
	I1007 13:26:19.791850  798060 cri.go:89] found id: "c3e7c8b52c45025ce147e3b2c1da5e533a691c4e8e24b3f2cb7415e36a840c54"
	I1007 13:26:19.791854  798060 cri.go:89] found id: "c5f965fdf56857ea93f09736586b6ccae602c7eebbfab3188b072ee1e7cc79ec"
	I1007 13:26:19.791859  798060 cri.go:89] found id: "4d2169429dc79cef01208b4deb15017a68c8b3984bf29045eaa9ce23ccf8d231"
	I1007 13:26:19.791863  798060 cri.go:89] found id: "22c8c39b9f4c672018876d09925db0997026247b819f4bbcd09d11991ffa30e1"
	I1007 13:26:19.791867  798060 cri.go:89] found id: "0d9ee1af9ae563f18a817b3d6b02a49c692b750789c1ffbe3229a73073a1384d"
	I1007 13:26:19.791871  798060 cri.go:89] found id: ""
	I1007 13:26:19.791928  798060 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-625039 -n kubernetes-upgrade-625039
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-625039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-625039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-625039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-625039: (1.050739045s)
--- FAIL: TestKubernetesUpgrade (398.93s)
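
The "[kubelet-check]" lines interleaved in the stderr above show kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) and getting "connection refused" after its initial 40s timeout. The sketch below is only a minimal illustration of that probe loop, not kubeadm's or minikube's actual implementation: the endpoint URL and the 40s figure are taken from the log, while the function name, the 2-second poll interval, and the error message are assumptions made for illustration.

// Hypothetical sketch (not kubeadm's real code): poll the kubelet /healthz
// endpoint the way the "[kubelet-check]" log lines above describe.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet retries the healthz URL until it answers 200 OK or the
// timeout elapses. Interval and messages are illustrative assumptions.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet answered /healthz
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kubelet did not report healthy within %s", timeout)
}

func main() {
	// URL and 40s window mirror the "[kubelet-check]" lines in the log above.
	if err := waitForKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err)
	}
}

When this probe never succeeds, kubeadm keeps retrying control-plane bootstrap, which is consistent with the repeated "Generating certificates and keys ... / Booting up control plane ..." pairs visible in the stdout blocks of the failing runs in this report.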

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (273.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 13:24:53.448512  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m33.403671816s)

                                                
                                                
-- stdout --
	* [old-k8s-version-120978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-120978" primary control-plane node in "old-k8s-version-120978" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:24:52.611236  797105 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:24:52.611343  797105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:24:52.611348  797105 out.go:358] Setting ErrFile to fd 2...
	I1007 13:24:52.611353  797105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:24:52.611525  797105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:24:52.612183  797105 out.go:352] Setting JSON to false
	I1007 13:24:52.613245  797105 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11242,"bootTime":1728296251,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:24:52.613356  797105 start.go:139] virtualization: kvm guest
	I1007 13:24:52.616594  797105 out.go:177] * [old-k8s-version-120978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:24:52.618170  797105 notify.go:220] Checking for updates...
	I1007 13:24:52.618190  797105 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:24:52.619827  797105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:24:52.621498  797105 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:24:52.622769  797105 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:24:52.624231  797105 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:24:52.625460  797105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:24:52.627336  797105 config.go:182] Loaded profile config "cert-expiration-004876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:24:52.627481  797105 config.go:182] Loaded profile config "cert-options-079658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:24:52.627615  797105 config.go:182] Loaded profile config "kubernetes-upgrade-625039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:24:52.627754  797105 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:24:52.665581  797105 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:24:52.666905  797105 start.go:297] selected driver: kvm2
	I1007 13:24:52.666924  797105 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:24:52.666940  797105 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:24:52.667677  797105 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:24:52.667783  797105 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:24:52.683121  797105 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:24:52.683186  797105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:24:52.683428  797105 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:24:52.683465  797105 cni.go:84] Creating CNI manager for ""
	I1007 13:24:52.683528  797105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:24:52.683540  797105 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 13:24:52.683604  797105 start.go:340] cluster config:
	{Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:24:52.683758  797105 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:24:52.685672  797105 out.go:177] * Starting "old-k8s-version-120978" primary control-plane node in "old-k8s-version-120978" cluster
	I1007 13:24:52.686879  797105 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:24:52.686928  797105 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 13:24:52.686939  797105 cache.go:56] Caching tarball of preloaded images
	I1007 13:24:52.687043  797105 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:24:52.687056  797105 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 13:24:52.687170  797105 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/config.json ...
	I1007 13:24:52.687195  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/config.json: {Name:mkadee6e10eaed65a423b92c40d1dd1bebc00793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:24:52.687358  797105 start.go:360] acquireMachinesLock for old-k8s-version-120978: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:24:55.267565  797105 start.go:364] duration metric: took 2.58018092s to acquireMachinesLock for "old-k8s-version-120978"
	I1007 13:24:55.267652  797105 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:24:55.267746  797105 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:24:55.270548  797105 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 13:24:55.270773  797105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:24:55.270833  797105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:24:55.292071  797105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I1007 13:24:55.292734  797105 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:24:55.293411  797105 main.go:141] libmachine: Using API Version  1
	I1007 13:24:55.293438  797105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:24:55.293813  797105 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:24:55.294044  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:24:55.294218  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:24:55.294403  797105 start.go:159] libmachine.API.Create for "old-k8s-version-120978" (driver="kvm2")
	I1007 13:24:55.294436  797105 client.go:168] LocalClient.Create starting
	I1007 13:24:55.294468  797105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:24:55.294511  797105 main.go:141] libmachine: Decoding PEM data...
	I1007 13:24:55.294529  797105 main.go:141] libmachine: Parsing certificate...
	I1007 13:24:55.294580  797105 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:24:55.294604  797105 main.go:141] libmachine: Decoding PEM data...
	I1007 13:24:55.294615  797105 main.go:141] libmachine: Parsing certificate...
	I1007 13:24:55.294632  797105 main.go:141] libmachine: Running pre-create checks...
	I1007 13:24:55.294649  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .PreCreateCheck
	I1007 13:24:55.295113  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetConfigRaw
	I1007 13:24:55.295540  797105 main.go:141] libmachine: Creating machine...
	I1007 13:24:55.295554  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .Create
	I1007 13:24:55.295707  797105 main.go:141] libmachine: (old-k8s-version-120978) Creating KVM machine...
	I1007 13:24:55.297155  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found existing default KVM network
	I1007 13:24:55.299131  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.298923  797172 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:b4:54} reservation:<nil>}
	I1007 13:24:55.301913  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.301789  797172 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1007 13:24:55.303169  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.303089  797172 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1f:c3:4e} reservation:<nil>}
	I1007 13:24:55.303974  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.303896  797172 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:a9:f8} reservation:<nil>}
	I1007 13:24:55.305332  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.305204  797172 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004725c0}
	I1007 13:24:55.305360  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | created network xml: 
	I1007 13:24:55.305371  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | <network>
	I1007 13:24:55.305378  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   <name>mk-old-k8s-version-120978</name>
	I1007 13:24:55.305387  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   <dns enable='no'/>
	I1007 13:24:55.305394  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   
	I1007 13:24:55.305405  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1007 13:24:55.305414  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |     <dhcp>
	I1007 13:24:55.305426  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1007 13:24:55.305436  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |     </dhcp>
	I1007 13:24:55.305444  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   </ip>
	I1007 13:24:55.305450  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG |   
	I1007 13:24:55.305469  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | </network>
	I1007 13:24:55.305477  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | 
	I1007 13:24:55.311458  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | trying to create private KVM network mk-old-k8s-version-120978 192.168.83.0/24...
	I1007 13:24:55.396665  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | private KVM network mk-old-k8s-version-120978 192.168.83.0/24 created
	I1007 13:24:55.396706  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.396620  797172 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:24:55.397027  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978 ...
	I1007 13:24:55.397103  797105 main.go:141] libmachine: (old-k8s-version-120978) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:24:55.397139  797105 main.go:141] libmachine: (old-k8s-version-120978) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:24:55.675828  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:55.675658  797172 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa...
	I1007 13:24:56.004694  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:56.004534  797172 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/old-k8s-version-120978.rawdisk...
	I1007 13:24:56.004732  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Writing magic tar header
	I1007 13:24:56.004764  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Writing SSH key tar header
	I1007 13:24:56.004789  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:56.004647  797172 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978 ...
	I1007 13:24:56.004809  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978 (perms=drwx------)
	I1007 13:24:56.004827  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978
	I1007 13:24:56.004840  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:24:56.004847  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:24:56.004853  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:24:56.004868  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:24:56.004880  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:24:56.004889  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:24:56.004904  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:24:56.004913  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Checking permissions on dir: /home
	I1007 13:24:56.004924  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Skipping /home - not owner
	I1007 13:24:56.004987  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:24:56.005010  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:24:56.005028  797105 main.go:141] libmachine: (old-k8s-version-120978) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:24:56.005042  797105 main.go:141] libmachine: (old-k8s-version-120978) Creating domain...
	I1007 13:24:56.006080  797105 main.go:141] libmachine: (old-k8s-version-120978) define libvirt domain using xml: 
	I1007 13:24:56.006104  797105 main.go:141] libmachine: (old-k8s-version-120978) <domain type='kvm'>
	I1007 13:24:56.006115  797105 main.go:141] libmachine: (old-k8s-version-120978)   <name>old-k8s-version-120978</name>
	I1007 13:24:56.006136  797105 main.go:141] libmachine: (old-k8s-version-120978)   <memory unit='MiB'>2200</memory>
	I1007 13:24:56.006150  797105 main.go:141] libmachine: (old-k8s-version-120978)   <vcpu>2</vcpu>
	I1007 13:24:56.006162  797105 main.go:141] libmachine: (old-k8s-version-120978)   <features>
	I1007 13:24:56.006173  797105 main.go:141] libmachine: (old-k8s-version-120978)     <acpi/>
	I1007 13:24:56.006185  797105 main.go:141] libmachine: (old-k8s-version-120978)     <apic/>
	I1007 13:24:56.006197  797105 main.go:141] libmachine: (old-k8s-version-120978)     <pae/>
	I1007 13:24:56.006205  797105 main.go:141] libmachine: (old-k8s-version-120978)     
	I1007 13:24:56.006212  797105 main.go:141] libmachine: (old-k8s-version-120978)   </features>
	I1007 13:24:56.006222  797105 main.go:141] libmachine: (old-k8s-version-120978)   <cpu mode='host-passthrough'>
	I1007 13:24:56.006230  797105 main.go:141] libmachine: (old-k8s-version-120978)   
	I1007 13:24:56.006239  797105 main.go:141] libmachine: (old-k8s-version-120978)   </cpu>
	I1007 13:24:56.006246  797105 main.go:141] libmachine: (old-k8s-version-120978)   <os>
	I1007 13:24:56.006264  797105 main.go:141] libmachine: (old-k8s-version-120978)     <type>hvm</type>
	I1007 13:24:56.006275  797105 main.go:141] libmachine: (old-k8s-version-120978)     <boot dev='cdrom'/>
	I1007 13:24:56.006286  797105 main.go:141] libmachine: (old-k8s-version-120978)     <boot dev='hd'/>
	I1007 13:24:56.006299  797105 main.go:141] libmachine: (old-k8s-version-120978)     <bootmenu enable='no'/>
	I1007 13:24:56.006309  797105 main.go:141] libmachine: (old-k8s-version-120978)   </os>
	I1007 13:24:56.006317  797105 main.go:141] libmachine: (old-k8s-version-120978)   <devices>
	I1007 13:24:56.006327  797105 main.go:141] libmachine: (old-k8s-version-120978)     <disk type='file' device='cdrom'>
	I1007 13:24:56.006344  797105 main.go:141] libmachine: (old-k8s-version-120978)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/boot2docker.iso'/>
	I1007 13:24:56.006355  797105 main.go:141] libmachine: (old-k8s-version-120978)       <target dev='hdc' bus='scsi'/>
	I1007 13:24:56.006365  797105 main.go:141] libmachine: (old-k8s-version-120978)       <readonly/>
	I1007 13:24:56.006374  797105 main.go:141] libmachine: (old-k8s-version-120978)     </disk>
	I1007 13:24:56.006384  797105 main.go:141] libmachine: (old-k8s-version-120978)     <disk type='file' device='disk'>
	I1007 13:24:56.006396  797105 main.go:141] libmachine: (old-k8s-version-120978)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:24:56.006414  797105 main.go:141] libmachine: (old-k8s-version-120978)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/old-k8s-version-120978.rawdisk'/>
	I1007 13:24:56.006425  797105 main.go:141] libmachine: (old-k8s-version-120978)       <target dev='hda' bus='virtio'/>
	I1007 13:24:56.006434  797105 main.go:141] libmachine: (old-k8s-version-120978)     </disk>
	I1007 13:24:56.006444  797105 main.go:141] libmachine: (old-k8s-version-120978)     <interface type='network'>
	I1007 13:24:56.006453  797105 main.go:141] libmachine: (old-k8s-version-120978)       <source network='mk-old-k8s-version-120978'/>
	I1007 13:24:56.006468  797105 main.go:141] libmachine: (old-k8s-version-120978)       <model type='virtio'/>
	I1007 13:24:56.006479  797105 main.go:141] libmachine: (old-k8s-version-120978)     </interface>
	I1007 13:24:56.006488  797105 main.go:141] libmachine: (old-k8s-version-120978)     <interface type='network'>
	I1007 13:24:56.006507  797105 main.go:141] libmachine: (old-k8s-version-120978)       <source network='default'/>
	I1007 13:24:56.006517  797105 main.go:141] libmachine: (old-k8s-version-120978)       <model type='virtio'/>
	I1007 13:24:56.006525  797105 main.go:141] libmachine: (old-k8s-version-120978)     </interface>
	I1007 13:24:56.006539  797105 main.go:141] libmachine: (old-k8s-version-120978)     <serial type='pty'>
	I1007 13:24:56.006552  797105 main.go:141] libmachine: (old-k8s-version-120978)       <target port='0'/>
	I1007 13:24:56.006562  797105 main.go:141] libmachine: (old-k8s-version-120978)     </serial>
	I1007 13:24:56.006571  797105 main.go:141] libmachine: (old-k8s-version-120978)     <console type='pty'>
	I1007 13:24:56.006582  797105 main.go:141] libmachine: (old-k8s-version-120978)       <target type='serial' port='0'/>
	I1007 13:24:56.006593  797105 main.go:141] libmachine: (old-k8s-version-120978)     </console>
	I1007 13:24:56.006603  797105 main.go:141] libmachine: (old-k8s-version-120978)     <rng model='virtio'>
	I1007 13:24:56.006637  797105 main.go:141] libmachine: (old-k8s-version-120978)       <backend model='random'>/dev/random</backend>
	I1007 13:24:56.006659  797105 main.go:141] libmachine: (old-k8s-version-120978)     </rng>
	I1007 13:24:56.006668  797105 main.go:141] libmachine: (old-k8s-version-120978)     
	I1007 13:24:56.006675  797105 main.go:141] libmachine: (old-k8s-version-120978)     
	I1007 13:24:56.006712  797105 main.go:141] libmachine: (old-k8s-version-120978)   </devices>
	I1007 13:24:56.006725  797105 main.go:141] libmachine: (old-k8s-version-120978) </domain>
	I1007 13:24:56.006740  797105 main.go:141] libmachine: (old-k8s-version-120978) 
	I1007 13:24:56.011497  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:14:fc:36 in network default
	I1007 13:24:56.012124  797105 main.go:141] libmachine: (old-k8s-version-120978) Ensuring networks are active...
	I1007 13:24:56.012151  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:56.013188  797105 main.go:141] libmachine: (old-k8s-version-120978) Ensuring network default is active
	I1007 13:24:56.013597  797105 main.go:141] libmachine: (old-k8s-version-120978) Ensuring network mk-old-k8s-version-120978 is active
	I1007 13:24:56.014196  797105 main.go:141] libmachine: (old-k8s-version-120978) Getting domain xml...
	I1007 13:24:56.014897  797105 main.go:141] libmachine: (old-k8s-version-120978) Creating domain...
	I1007 13:24:56.382168  797105 main.go:141] libmachine: (old-k8s-version-120978) Waiting to get IP...
	I1007 13:24:56.383055  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:56.383589  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:56.383629  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:56.383554  797172 retry.go:31] will retry after 194.82747ms: waiting for machine to come up
	I1007 13:24:56.580171  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:56.580753  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:56.580784  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:56.580698  797172 retry.go:31] will retry after 339.560003ms: waiting for machine to come up
	I1007 13:24:56.922333  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:56.922866  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:56.922893  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:56.922824  797172 retry.go:31] will retry after 444.709694ms: waiting for machine to come up
	I1007 13:24:57.369673  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:57.370277  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:57.370305  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:57.370220  797172 retry.go:31] will retry after 372.742538ms: waiting for machine to come up
	I1007 13:24:57.744947  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:57.745505  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:57.745535  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:57.745466  797172 retry.go:31] will retry after 491.864344ms: waiting for machine to come up
	I1007 13:24:58.239383  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:58.239987  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:58.240015  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:58.239889  797172 retry.go:31] will retry after 794.210973ms: waiting for machine to come up
	I1007 13:24:59.036574  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:59.037175  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:59.037202  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:59.037115  797172 retry.go:31] will retry after 852.405671ms: waiting for machine to come up
	I1007 13:24:59.891270  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:24:59.891802  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:24:59.891825  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:24:59.891756  797172 retry.go:31] will retry after 1.434440517s: waiting for machine to come up
	I1007 13:25:01.328381  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:01.328811  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:01.328840  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:01.328772  797172 retry.go:31] will retry after 1.540856642s: waiting for machine to come up
	I1007 13:25:02.871217  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:02.871723  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:02.871746  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:02.871692  797172 retry.go:31] will retry after 2.120880917s: waiting for machine to come up
	I1007 13:25:04.994558  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:04.995148  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:04.995178  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:04.995105  797172 retry.go:31] will retry after 2.882671644s: waiting for machine to come up
	I1007 13:25:07.879325  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:07.879815  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:07.879840  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:07.879786  797172 retry.go:31] will retry after 2.953467668s: waiting for machine to come up
	I1007 13:25:10.835630  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:10.836348  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:10.836367  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:10.836229  797172 retry.go:31] will retry after 3.183508717s: waiting for machine to come up
	I1007 13:25:14.023622  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:14.024086  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:25:14.024112  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:25:14.024044  797172 retry.go:31] will retry after 5.131366709s: waiting for machine to come up
	I1007 13:25:19.159442  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.160219  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has current primary IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.160242  797105 main.go:141] libmachine: (old-k8s-version-120978) Found IP for machine: 192.168.83.103
	I1007 13:25:19.160252  797105 main.go:141] libmachine: (old-k8s-version-120978) Reserving static IP address...
	I1007 13:25:19.160724  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-120978", mac: "52:54:00:ce:bc:6d", ip: "192.168.83.103"} in network mk-old-k8s-version-120978
	I1007 13:25:19.245630  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Getting to WaitForSSH function...
	I1007 13:25:19.245657  797105 main.go:141] libmachine: (old-k8s-version-120978) Reserved static IP address: 192.168.83.103
	I1007 13:25:19.245670  797105 main.go:141] libmachine: (old-k8s-version-120978) Waiting for SSH to be available...
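The "will retry after ..." lines above come from a generic retry helper: the KVM driver keeps polling libvirt for a DHCP lease and sleeps a growing, jittered delay between attempts until the lease shows up. The Go sketch below illustrates that pattern only; the function name, delays, and timeout are assumptions for illustration, not minikube's actual retry.go code.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff is a hypothetical stand-in for the helper behind the
    // "will retry after ..." log lines: call fn, and on failure sleep a growing,
    // jittered delay until fn succeeds or the overall timeout expires.
    func retryWithBackoff(fn func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            // Jitter the delay and grow it, roughly like the 250ms -> 5s progression above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        attempt := 0
        err := retryWithBackoff(func() error {
            attempt++
            if attempt < 4 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        }, 2*time.Minute)
        fmt.Println("done, err =", err)
    }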
	I1007 13:25:19.248427  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.248822  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.248843  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.248986  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Using SSH client type: external
	I1007 13:25:19.249026  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa (-rw-------)
	I1007 13:25:19.249059  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:25:19.249077  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | About to run SSH command:
	I1007 13:25:19.249093  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | exit 0
	I1007 13:25:19.378714  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | SSH cmd err, output: <nil>: 
	I1007 13:25:19.379000  797105 main.go:141] libmachine: (old-k8s-version-120978) KVM machine creation complete!
	I1007 13:25:19.379368  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetConfigRaw
	I1007 13:25:19.380037  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:19.380281  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:19.380487  797105 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:25:19.380501  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetState
	I1007 13:25:19.381877  797105 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:25:19.381894  797105 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:25:19.381901  797105 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:25:19.381910  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.384535  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.384922  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.384960  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.385086  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:19.385290  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.385438  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.385606  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:19.385773  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:19.386053  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:19.386072  797105 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:25:19.497575  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
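As the lines above show, "waiting for SSH" simply means running the no-op command `exit 0` over SSH with non-interactive options until it exits cleanly. Below is a minimal sketch of such a probe; the sshReady helper name and the key path are placeholders, and this is not minikube's sshutil code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady is a hypothetical probe: run `exit 0` on the guest with the same
    // non-interactive flags shown in the log and report whether it exited cleanly.
    func sshReady(user, host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-i", keyPath,
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "ConnectTimeout=10",
            fmt.Sprintf("%s@%s", user, host),
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        // Host and user are taken from the log above; the key path is a placeholder.
        for !sshReady("docker", "192.168.83.103", "/path/to/machines/old-k8s-version-120978/id_rsa") {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }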
	I1007 13:25:19.497604  797105 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:25:19.497613  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.501220  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.501611  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.501641  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.501828  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:19.502060  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.502232  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.502384  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:19.502541  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:19.502743  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:19.502761  797105 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:25:19.618937  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:25:19.619028  797105 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:25:19.619040  797105 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:25:19.619051  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:25:19.619330  797105 buildroot.go:166] provisioning hostname "old-k8s-version-120978"
	I1007 13:25:19.619361  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:25:19.619556  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.622552  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.623025  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.623056  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.623358  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:19.623658  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.623852  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.624064  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:19.624274  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:19.624561  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:19.624589  797105 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-120978 && echo "old-k8s-version-120978" | sudo tee /etc/hostname
	I1007 13:25:19.757563  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-120978
	
	I1007 13:25:19.757600  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.760488  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.760806  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.760839  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.760982  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:19.761165  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.761343  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.761503  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:19.761690  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:19.761942  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:19.761964  797105 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-120978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-120978/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-120978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:25:19.884074  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:25:19.884109  797105 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:25:19.884177  797105 buildroot.go:174] setting up certificates
	I1007 13:25:19.884194  797105 provision.go:84] configureAuth start
	I1007 13:25:19.884207  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:25:19.884500  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:25:19.887548  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.887929  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.887960  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.888170  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.890646  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.891004  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.891044  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.891190  797105 provision.go:143] copyHostCerts
	I1007 13:25:19.891254  797105 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:25:19.891273  797105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:25:19.891333  797105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:25:19.891436  797105 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:25:19.891445  797105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:25:19.891463  797105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:25:19.891527  797105 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:25:19.891541  797105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:25:19.891558  797105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:25:19.891625  797105 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-120978 san=[127.0.0.1 192.168.83.103 localhost minikube old-k8s-version-120978]
	I1007 13:25:19.982187  797105 provision.go:177] copyRemoteCerts
	I1007 13:25:19.982252  797105 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:25:19.982281  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:19.985669  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.986093  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:19.986136  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:19.986357  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:19.986576  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:19.986821  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:19.986989  797105 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:25:20.076506  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:25:20.102551  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:25:20.128502  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1007 13:25:20.154289  797105 provision.go:87] duration metric: took 270.075773ms to configureAuth
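The configureAuth step above copies the host CA material and generates a server certificate whose SANs cover the entries listed in the provision.go line (127.0.0.1, 192.168.83.103, localhost, minikube, old-k8s-version-120978). The sketch below shows what issuing a SAN certificate like that looks like with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key, so treat it as an illustration rather than the provisioner's actual code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Generate the server key, then issue a certificate whose SANs match
        // the san=[...] list from the log above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-120978"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-120978"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.103")},
        }
        // Self-signed here for brevity; minikube uses its CA cert/key as the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }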
	I1007 13:25:20.154327  797105 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:25:20.154508  797105 config.go:182] Loaded profile config "old-k8s-version-120978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:25:20.154588  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:20.157378  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.157711  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.157756  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.157943  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:20.158155  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.158381  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.158552  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:20.158718  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:20.158953  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:20.158975  797105 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:25:20.398268  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:25:20.398299  797105 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:25:20.398309  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetURL
	I1007 13:25:20.399732  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | Using libvirt version 6000000
	I1007 13:25:20.402626  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.403066  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.403100  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.403308  797105 main.go:141] libmachine: Docker is up and running!
	I1007 13:25:20.403322  797105 main.go:141] libmachine: Reticulating splines...
	I1007 13:25:20.403330  797105 client.go:171] duration metric: took 25.10888517s to LocalClient.Create
	I1007 13:25:20.403356  797105 start.go:167] duration metric: took 25.108956612s to libmachine.API.Create "old-k8s-version-120978"
	I1007 13:25:20.403371  797105 start.go:293] postStartSetup for "old-k8s-version-120978" (driver="kvm2")
	I1007 13:25:20.403385  797105 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:25:20.403410  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:20.403662  797105 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:25:20.403706  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:20.406412  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.406837  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.406880  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.407067  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:20.407261  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.407404  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:20.407520  797105 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:25:20.497713  797105 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:25:20.502470  797105 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:25:20.502501  797105 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:25:20.502562  797105 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:25:20.502633  797105 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:25:20.502750  797105 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:25:20.513118  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:25:20.542473  797105 start.go:296] duration metric: took 139.08345ms for postStartSetup
	I1007 13:25:20.542543  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetConfigRaw
	I1007 13:25:20.543297  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:25:20.546496  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.546881  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.546901  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.547146  797105 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/config.json ...
	I1007 13:25:20.547401  797105 start.go:128] duration metric: took 25.279639618s to createHost
	I1007 13:25:20.547437  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:20.550907  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.551216  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.551241  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.551456  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:20.551673  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.551823  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.552026  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:20.552258  797105 main.go:141] libmachine: Using SSH client type: native
	I1007 13:25:20.552490  797105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:25:20.552505  797105 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:25:20.667042  797105 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728307520.643101650
	
	I1007 13:25:20.667075  797105 fix.go:216] guest clock: 1728307520.643101650
	I1007 13:25:20.667087  797105 fix.go:229] Guest: 2024-10-07 13:25:20.64310165 +0000 UTC Remote: 2024-10-07 13:25:20.547418006 +0000 UTC m=+27.977879401 (delta=95.683644ms)
	I1007 13:25:20.667120  797105 fix.go:200] guest clock delta is within tolerance: 95.683644ms
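The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine if the delta is small. A minimal sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not necessarily minikube's threshold.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Parse the guest's `date +%s.%N` output (value taken from the log above)
        // and compare it with the local clock.
        out := "1728307520.643101650"
        secs, err := strconv.ParseFloat(out, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
    }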
	I1007 13:25:20.667128  797105 start.go:83] releasing machines lock for "old-k8s-version-120978", held for 25.399520492s
	I1007 13:25:20.667165  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:20.667501  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:25:20.669998  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.670400  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.670433  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.670567  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:20.671076  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:20.671228  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:25:20.671308  797105 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:25:20.671374  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:20.671444  797105 ssh_runner.go:195] Run: cat /version.json
	I1007 13:25:20.671472  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:25:20.674044  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.674297  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.674432  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.674461  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.674596  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:20.674683  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:20.674704  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:20.674755  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.674862  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:25:20.674938  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:20.675006  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:25:20.675074  797105 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:25:20.675128  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:25:20.675250  797105 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:25:20.785033  797105 ssh_runner.go:195] Run: systemctl --version
	I1007 13:25:20.791941  797105 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:25:20.960373  797105 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:25:20.970711  797105 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:25:20.970807  797105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:25:20.989645  797105 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:25:20.989687  797105 start.go:495] detecting cgroup driver to use...
	I1007 13:25:20.989776  797105 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:25:21.007473  797105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:25:21.022959  797105 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:25:21.023053  797105 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:25:21.040863  797105 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:25:21.056444  797105 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:25:21.181241  797105 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:25:21.337062  797105 docker.go:233] disabling docker service ...
	I1007 13:25:21.337144  797105 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:25:21.354108  797105 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:25:21.370616  797105 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:25:21.519665  797105 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:25:21.663657  797105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:25:21.680551  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:25:21.704584  797105 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 13:25:21.704687  797105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:25:21.717152  797105 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:25:21.717220  797105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:25:21.729376  797105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:25:21.741368  797105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
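After the three sed edits above, the edited keys in /etc/crio/crio.conf.d/02-crio.conf end up as follows (only the keys touched by the commands are shown; the rest of the drop-in is omitted):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"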
	I1007 13:25:21.753388  797105 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:25:21.766349  797105 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:25:21.777943  797105 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:25:21.778041  797105 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:25:21.792757  797105 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:25:21.804121  797105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:25:21.946273  797105 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:25:22.055287  797105 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:25:22.055357  797105 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:25:22.060433  797105 start.go:563] Will wait 60s for crictl version
	I1007 13:25:22.060500  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:22.064857  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:25:22.112819  797105 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:25:22.112895  797105 ssh_runner.go:195] Run: crio --version
	I1007 13:25:22.143949  797105 ssh_runner.go:195] Run: crio --version
	I1007 13:25:22.177485  797105 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 13:25:22.178797  797105 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:25:22.181907  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:22.182341  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:25:10 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:25:22.182371  797105 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:25:22.182659  797105 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1007 13:25:22.188546  797105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:25:22.206752  797105 kubeadm.go:883] updating cluster {Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:25:22.206874  797105 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:25:22.206934  797105 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:25:22.244598  797105 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:25:22.244691  797105 ssh_runner.go:195] Run: which lz4
	I1007 13:25:22.249611  797105 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:25:22.254349  797105 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:25:22.254392  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 13:25:24.085356  797105 crio.go:462] duration metric: took 1.835768913s to copy over tarball
	I1007 13:25:24.085461  797105 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:25:26.793229  797105 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707724009s)
	I1007 13:25:26.793263  797105 crio.go:469] duration metric: took 2.707868686s to extract the tarball
	I1007 13:25:26.793271  797105 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:25:26.840535  797105 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:25:26.888174  797105 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:25:26.888207  797105 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:25:26.888285  797105 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:25:26.888317  797105 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:26.888331  797105 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:26.888375  797105 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:26.888411  797105 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:26.888461  797105 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:26.888384  797105 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 13:25:26.888478  797105 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 13:25:26.890360  797105 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:26.890380  797105 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 13:25:26.890392  797105 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:26.890359  797105 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:26.890426  797105 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 13:25:26.890417  797105 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:26.890512  797105 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:25:26.890494  797105 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.049461  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:27.051418  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:27.054443  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:27.059486  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 13:25:27.063181  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:27.083037  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 13:25:27.085813  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.182599  797105 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 13:25:27.182654  797105 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:27.182709  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.229251  797105 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 13:25:27.229304  797105 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:27.229356  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.252058  797105 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 13:25:27.252108  797105 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:27.252158  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.256515  797105 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 13:25:27.256588  797105 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 13:25:27.256612  797105 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:27.256627  797105 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 13:25:27.256683  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.256683  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.274159  797105 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 13:25:27.274208  797105 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 13:25:27.274262  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.277604  797105 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 13:25:27.277658  797105 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.277699  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:27.277747  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:27.277703  797105 ssh_runner.go:195] Run: which crictl
	I1007 13:25:27.277819  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:27.277853  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:25:27.277896  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:27.283617  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.283636  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:25:27.430909  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:27.466933  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:27.467022  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:27.467078  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:27.467123  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:25:27.467163  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:25:27.467230  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.516493  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:25:27.631329  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:25:27.631429  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:25:27.631509  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:25:27.645284  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:25:27.645357  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:25:27.645411  797105 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:25:27.645436  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 13:25:27.722384  797105 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:25:27.754406  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 13:25:27.754478  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 13:25:27.776536  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 13:25:27.807144  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 13:25:27.812829  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 13:25:27.812970  797105 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 13:25:27.928204  797105 cache_images.go:92] duration metric: took 1.039973666s to LoadCachedImages
	W1007 13:25:27.928364  797105 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
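Editor's note on the block above: minikube first asks the container runtime whether each v1.20.0 control-plane image is already present at the expected image ID ("needs transfer" when it is not), removes any stale tag with crictl rmi, and then tries to load the image from its on-disk cache; because the cache files are missing on this host, it logs the warning and continues, leaving kubeadm to pull the images itself. The Go sketch below only illustrates the presence check, assuming crictl is on PATH; the helper name is invented and this is not minikube's actual cache_images.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer is an illustrative stand-in for the "needs transfer" check in
// the log: list the IDs the runtime has for an image reference via
// `crictl images -q` and see whether the expected ID is among them.
func needsTransfer(imageRef, wantID string) bool {
	out, err := exec.Command("sudo", "crictl", "images", "-q", imageRef).Output()
	if err != nil {
		return true // could not query the runtime; treat the image as missing
	}
	return !strings.Contains(string(out), wantID)
}

func main() {
	fmt.Println(needsTransfer(
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"))
}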
	I1007 13:25:27.928388  797105 kubeadm.go:934] updating node { 192.168.83.103 8443 v1.20.0 crio true true} ...
	I1007 13:25:27.928545  797105 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-120978 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
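Editor's note: the [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in minikube is about to write. The empty ExecStart= clears the stock unit's command so the versioned kubelet binary can be started with the CRI-O endpoint, node name, and node IP for this profile. Below is a rough sketch of rendering such a drop-in from a template; the template text, field names, and values are modeled on the log and are illustrative, not minikube's actual template in kubeadm.go.

package main

import (
	"os"
	"text/template"
)

// Illustrative template for a kubelet drop-in like the one in the log. The
// second ExecStart= line follows an empty ExecStart= on purpose, so it
// replaces (rather than appends to) the base unit's command.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-drop-in").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":   "crio",
		"Kubelet":   "/var/lib/minikube/binaries/v1.20.0/kubelet",
		"CRISocket": "unix:///var/run/crio/crio.sock",
		"NodeName":  "old-k8s-version-120978",
		"NodeIP":    "192.168.83.103",
	})
}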
	I1007 13:25:27.928658  797105 ssh_runner.go:195] Run: crio config
	I1007 13:25:27.985842  797105 cni.go:84] Creating CNI manager for ""
	I1007 13:25:27.985870  797105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:25:27.985881  797105 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:25:27.985901  797105 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.103 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-120978 NodeName:old-k8s-version-120978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 13:25:27.986122  797105 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-120978"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:25:27.986220  797105 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 13:25:27.997878  797105 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:25:27.998001  797105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:25:28.010472  797105 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1007 13:25:28.031384  797105 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:25:28.051710  797105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
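Editor's note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what was just copied to /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check on such a file is to decode each document in turn; the sketch below assumes the third-party gopkg.in/yaml.v3 package and the path from the log, and is not part of minikube.

package main

import (
	"fmt"
	"io"
	"os"

	yaml "gopkg.in/yaml.v3" // assumption: third-party YAML package is available
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Prints one line per document, e.g. "kubeadm.k8s.io/v1beta2 InitConfiguration".
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}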
	I1007 13:25:28.070997  797105 ssh_runner.go:195] Run: grep 192.168.83.103	control-plane.minikube.internal$ /etc/hosts
	I1007 13:25:28.075383  797105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
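Editor's note: the bash one-liner above is how minikube pins control-plane.minikube.internal in /etc/hosts: it drops any existing line for that name and appends the current control-plane IP, which the generated kubeconfigs and certificates rely on. The Go sketch below performs the same edit directly; the 192.168.83.103 address is the one from this run, writing /etc/hosts requires root, and this is an illustration rather than minikube's code.

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.83.103\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane alias, mirroring the grep -v above.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}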
	I1007 13:25:28.089494  797105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:25:28.223784  797105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:25:28.243231  797105 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978 for IP: 192.168.83.103
	I1007 13:25:28.243256  797105 certs.go:194] generating shared ca certs ...
	I1007 13:25:28.243274  797105 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.243449  797105 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:25:28.243525  797105 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:25:28.243546  797105 certs.go:256] generating profile certs ...
	I1007 13:25:28.243628  797105 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.key
	I1007 13:25:28.243649  797105 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt with IP's: []
	I1007 13:25:28.342153  797105 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt ...
	I1007 13:25:28.342191  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: {Name:mk67def35a48a6a1e1d2fdbf9038d56b0e95f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.342381  797105 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.key ...
	I1007 13:25:28.342395  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.key: {Name:mkd1e0f29912cd776b72b7bae0a71f261cc87ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.342469  797105 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key.a8838b3f
	I1007 13:25:28.342487  797105 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt.a8838b3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.103]
	I1007 13:25:28.546414  797105 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt.a8838b3f ...
	I1007 13:25:28.546453  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt.a8838b3f: {Name:mke0b5a77cfef9a56e2e3729edae30353976640f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.546645  797105 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key.a8838b3f ...
	I1007 13:25:28.546660  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key.a8838b3f: {Name:mk10bfa7921d908f48dad8b2e5535a2a7304c33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.546738  797105 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt.a8838b3f -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt
	I1007 13:25:28.546837  797105 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key.a8838b3f -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key
	I1007 13:25:28.546899  797105 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key
	I1007 13:25:28.546916  797105 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.crt with IP's: []
	I1007 13:25:28.951913  797105 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.crt ...
	I1007 13:25:28.951956  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.crt: {Name:mk0231704ae50044c7d95bd808c60680016c9622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.952147  797105 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key ...
	I1007 13:25:28.952159  797105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key: {Name:mke8336f0a751fc3f5afb502bd103201196b15b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:25:28.952329  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:25:28.952370  797105 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:25:28.952381  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:25:28.952404  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:25:28.952429  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:25:28.952448  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:25:28.952484  797105 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:25:28.953126  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:25:28.983689  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:25:29.011260  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:25:29.041275  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:25:29.073684  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 13:25:29.105469  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:25:29.145148  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:25:29.197418  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:25:29.228044  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:25:29.257101  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:25:29.286807  797105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:25:29.317892  797105 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
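Editor's note: the certs.go/crypto.go lines above generate the profile's client, apiserver, and aggregator certificates, each signed by the shared minikube CA, and the apiserver serving cert carries the SANs logged earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.83.103). The sketch below shows the general shape of issuing such a cert with Go's crypto/x509; it generates a throwaway CA in place of loading minikube's ca.crt/ca.key and is not minikube's crypto.go implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-contained stand-in for the CA: generated here instead of loading
	// the profile's ca.crt/ca.key from the .minikube directory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert signed for the service IP, localhost, and the node IP,
	// matching the SAN list in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.83.103"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}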
	I1007 13:25:29.341135  797105 ssh_runner.go:195] Run: openssl version
	I1007 13:25:29.349730  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:25:29.362742  797105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:25:29.369348  797105 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:25:29.369430  797105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:25:29.376338  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:25:29.392660  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:25:29.404955  797105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:25:29.411811  797105 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:25:29.411902  797105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:25:29.420295  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:25:29.433159  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:25:29.446109  797105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:25:29.451498  797105 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:25:29.451563  797105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:25:29.458407  797105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
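Editor's note: the ls/openssl/ln sequence above installs the minikube certificates into the system trust directory. OpenSSL-style lookups resolve a certificate through a "<subject-hash>.0" symlink in /etc/ssl/certs, so each PEM copied under /usr/share/ca-certificates has its hash computed with `openssl x509 -hash -noout` and a matching link created. A compact sketch of that step follows (paths from the log, run as root, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Ask openssl for the subject hash the trust store uses as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))

	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}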
	I1007 13:25:29.470506  797105 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:25:29.475752  797105 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:25:29.475832  797105 kubeadm.go:392] StartCluster: {Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:25:29.475927  797105 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:25:29.475980  797105 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:25:29.528507  797105 cri.go:89] found id: ""
	I1007 13:25:29.528604  797105 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:25:29.539676  797105 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:25:29.551394  797105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:25:29.562746  797105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:25:29.562782  797105 kubeadm.go:157] found existing configuration files:
	
	I1007 13:25:29.562841  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:25:29.573358  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:25:29.573433  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:25:29.585074  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:25:29.595159  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:25:29.595234  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:25:29.606043  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:25:29.620548  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:25:29.620638  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:25:29.642150  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:25:29.655870  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:25:29.655950  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:25:29.666480  797105 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:25:29.803776  797105 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:25:29.804157  797105 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:25:29.979891  797105 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:25:29.980080  797105 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:25:29.980230  797105 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:25:30.181171  797105 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:25:30.183445  797105 out.go:235]   - Generating certificates and keys ...
	I1007 13:25:30.183572  797105 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:25:30.183660  797105 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:25:30.326485  797105 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:25:30.437611  797105 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:25:30.629741  797105 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:25:30.845809  797105 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:25:31.077853  797105 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:25:31.078195  797105 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	I1007 13:25:31.153231  797105 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:25:31.153436  797105 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	I1007 13:25:31.301729  797105 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:25:31.639370  797105 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:25:31.849964  797105 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:25:31.850115  797105 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:25:31.992702  797105 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:25:32.095805  797105 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:25:32.301540  797105 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:25:32.466207  797105 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:25:32.482534  797105 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:25:32.485311  797105 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:25:32.485397  797105 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:25:32.617113  797105 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:25:32.618686  797105 out.go:235]   - Booting up control plane ...
	I1007 13:25:32.618838  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:25:32.624437  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:25:32.625360  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:25:32.626345  797105 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:25:32.632644  797105 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:26:12.626227  797105 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:26:12.626908  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:26:12.627257  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:26:17.627210  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:26:17.627467  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:26:27.626780  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:26:27.627054  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:26:47.626842  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:26:47.627154  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:27:27.628004  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:27:27.628266  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:27:27.628274  797105 kubeadm.go:310] 
	I1007 13:27:27.628356  797105 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:27:27.628418  797105 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:27:27.628430  797105 kubeadm.go:310] 
	I1007 13:27:27.628475  797105 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:27:27.628526  797105 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:27:27.628660  797105 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:27:27.628670  797105 kubeadm.go:310] 
	I1007 13:27:27.628795  797105 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:27:27.628840  797105 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:27:27.628886  797105 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:27:27.628894  797105 kubeadm.go:310] 
	I1007 13:27:27.629020  797105 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:27:27.629122  797105 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:27:27.629133  797105 kubeadm.go:310] 
	I1007 13:27:27.629268  797105 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:27:27.629369  797105 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:27:27.629460  797105 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:27:27.629550  797105 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:27:27.629560  797105 kubeadm.go:310] 
	I1007 13:27:27.630828  797105 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:27:27.630956  797105 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:27:27.631075  797105 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
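Editor's note: the repeated [kubelet-check] failures above are kubeadm polling the kubelet's local healthz endpoint (port 10248) and getting connection refused because the kubelet never became healthy, which is also why the failure text points at systemctl status kubelet, journalctl -xeu kubelet, and crictl ps. The sketch below reproduces the probe itself; the endpoint and port are the kubelet defaults and this is not kubeadm's code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// While the kubelet is down, this is the "connection refused" seen in the log.
			fmt.Printf("attempt %d: kubelet not healthy yet: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
		return
	}
	fmt.Println("giving up; check 'journalctl -xeu kubelet' as the output above suggests")
}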
	W1007 13:27:27.631278  797105 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-120978] and IPs [192.168.83.103 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:27:27.631361  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:27:28.651403  797105 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.020012271s)
	I1007 13:27:28.651501  797105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:27:28.672584  797105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:27:28.688135  797105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:27:28.688163  797105 kubeadm.go:157] found existing configuration files:
	
	I1007 13:27:28.688225  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:27:28.702420  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:27:28.702505  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:27:28.716980  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:27:28.730678  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:27:28.730764  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:27:28.741651  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:27:28.753223  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:27:28.753296  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:27:28.765254  797105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:27:28.776272  797105 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:27:28.776346  797105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:27:28.787096  797105 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:27:29.064658  797105 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:29:25.283870  797105 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:29:25.284032  797105 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:29:25.285832  797105 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:29:25.285895  797105 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:29:25.285987  797105 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:29:25.286106  797105 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:29:25.286213  797105 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:29:25.286332  797105 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:29:25.288238  797105 out.go:235]   - Generating certificates and keys ...
	I1007 13:29:25.288334  797105 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:29:25.288396  797105 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:29:25.288469  797105 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:29:25.288539  797105 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:29:25.288622  797105 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:29:25.288688  797105 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:29:25.288799  797105 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:29:25.288880  797105 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:29:25.288973  797105 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:29:25.289066  797105 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:29:25.289114  797105 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:29:25.289182  797105 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:29:25.289288  797105 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:29:25.289343  797105 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:29:25.289404  797105 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:29:25.289458  797105 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:29:25.289572  797105 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:29:25.289687  797105 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:29:25.289742  797105 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:29:25.289840  797105 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:29:25.291657  797105 out.go:235]   - Booting up control plane ...
	I1007 13:29:25.291751  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:29:25.291841  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:29:25.291967  797105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:29:25.292052  797105 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:29:25.292189  797105 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:29:25.292279  797105 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:29:25.292376  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:29:25.292568  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:29:25.292642  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:29:25.292818  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:29:25.292887  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:29:25.293056  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:29:25.293114  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:29:25.293302  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:29:25.293383  797105 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:29:25.293549  797105 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:29:25.293558  797105 kubeadm.go:310] 
	I1007 13:29:25.293597  797105 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:29:25.293630  797105 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:29:25.293637  797105 kubeadm.go:310] 
	I1007 13:29:25.293699  797105 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:29:25.293753  797105 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:29:25.293844  797105 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:29:25.293851  797105 kubeadm.go:310] 
	I1007 13:29:25.293939  797105 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:29:25.293971  797105 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:29:25.294008  797105 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:29:25.294012  797105 kubeadm.go:310] 
	I1007 13:29:25.294121  797105 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:29:25.294196  797105 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:29:25.294206  797105 kubeadm.go:310] 
	I1007 13:29:25.294333  797105 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:29:25.294436  797105 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:29:25.294506  797105 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:29:25.294561  797105 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:29:25.294621  797105 kubeadm.go:310] 
	I1007 13:29:25.294659  797105 kubeadm.go:394] duration metric: took 3m55.818835447s to StartCluster
	I1007 13:29:25.294706  797105 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:29:25.294790  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:29:25.346190  797105 cri.go:89] found id: ""
	I1007 13:29:25.346248  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.346261  797105 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:29:25.346271  797105 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:29:25.346343  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:29:25.384269  797105 cri.go:89] found id: ""
	I1007 13:29:25.384304  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.384316  797105 logs.go:284] No container was found matching "etcd"
	I1007 13:29:25.384325  797105 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:29:25.384393  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:29:25.425726  797105 cri.go:89] found id: ""
	I1007 13:29:25.425759  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.425771  797105 logs.go:284] No container was found matching "coredns"
	I1007 13:29:25.425780  797105 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:29:25.425856  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:29:25.461915  797105 cri.go:89] found id: ""
	I1007 13:29:25.461957  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.461970  797105 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:29:25.461980  797105 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:29:25.462073  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:29:25.498942  797105 cri.go:89] found id: ""
	I1007 13:29:25.498976  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.498985  797105 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:29:25.498992  797105 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:29:25.499064  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:29:25.535281  797105 cri.go:89] found id: ""
	I1007 13:29:25.535316  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.535329  797105 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:29:25.535338  797105 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:29:25.535408  797105 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:29:25.576226  797105 cri.go:89] found id: ""
	I1007 13:29:25.576256  797105 logs.go:282] 0 containers: []
	W1007 13:29:25.576266  797105 logs.go:284] No container was found matching "kindnet"
	I1007 13:29:25.576277  797105 logs.go:123] Gathering logs for container status ...
	I1007 13:29:25.576294  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:29:25.617662  797105 logs.go:123] Gathering logs for kubelet ...
	I1007 13:29:25.617702  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:29:25.670039  797105 logs.go:123] Gathering logs for dmesg ...
	I1007 13:29:25.670091  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:29:25.684512  797105 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:29:25.684552  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:29:25.849145  797105 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:29:25.849171  797105 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:29:25.849188  797105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:29:25.954764  797105 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:29:25.954850  797105 out.go:270] * 
	* 
	W1007 13:29:25.954913  797105 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:29:25.954925  797105 out.go:270] * 
	* 
	W1007 13:29:25.955730  797105 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:29:25.959018  797105 out.go:201] 
	W1007 13:29:25.960344  797105 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:29:25.960394  797105 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:29:25.960427  797105 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:29:25.962253  797105 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 6 (244.736031ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:26.258867  799673 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-120978" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-120978" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.71s)
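Triage note: the kubeadm output above already names the relevant checks; the following is only a minimal sketch of how one might follow them up on the node, assuming SSH access to the profile (e.g. via `minikube ssh -p old-k8s-version-120978`). The journalctl and crictl invocations are taken verbatim from the log; nothing else is confirmed by this run.

	# check whether the kubelet is up and its health endpoint answers
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200
	curl -sSL http://localhost:10248/healthz
	# check whether any control-plane container ever started under CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

The run's own suggestion is to retry with an explicit cgroup driver, e.g. appending `--extra-config=kubelet.cgroup-driver=systemd` to the `minikube start` invocation shown at start_stop_delete_test.go:188, and the post-mortem status check additionally points at a stale kubeconfig context that `minikube update-context` would refresh.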

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-016701 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-016701 --alsologtostderr -v=3: exit status 82 (2m0.590251892s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-016701"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:27:28.195233  799067 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:27:28.195543  799067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:27:28.195553  799067 out.go:358] Setting ErrFile to fd 2...
	I1007 13:27:28.195560  799067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:27:28.195848  799067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:27:28.196179  799067 out.go:352] Setting JSON to false
	I1007 13:27:28.196283  799067 mustload.go:65] Loading cluster: no-preload-016701
	I1007 13:27:28.196834  799067 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:27:28.196933  799067 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/config.json ...
	I1007 13:27:28.197142  799067 mustload.go:65] Loading cluster: no-preload-016701
	I1007 13:27:28.197286  799067 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:27:28.197321  799067 stop.go:39] StopHost: no-preload-016701
	I1007 13:27:28.197963  799067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:27:28.198020  799067 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:27:28.213969  799067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I1007 13:27:28.214577  799067 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:27:28.215180  799067 main.go:141] libmachine: Using API Version  1
	I1007 13:27:28.215203  799067 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:27:28.215579  799067 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:27:28.218142  799067 out.go:177] * Stopping node "no-preload-016701"  ...
	I1007 13:27:28.219467  799067 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 13:27:28.219524  799067 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:27:28.219849  799067 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 13:27:28.219883  799067 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:27:28.222980  799067 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:27:28.223417  799067 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:25:54 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:27:28.223451  799067 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:27:28.223609  799067 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:27:28.223817  799067 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:27:28.223968  799067 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:27:28.224095  799067 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:27:28.353274  799067 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 13:27:28.430463  799067 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 13:27:28.498372  799067 main.go:141] libmachine: Stopping "no-preload-016701"...
	I1007 13:27:28.498439  799067 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:27:28.500460  799067 main.go:141] libmachine: (no-preload-016701) Calling .Stop
	I1007 13:27:28.504215  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 0/120
	I1007 13:27:29.505902  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 1/120
	I1007 13:27:30.508308  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 2/120
	I1007 13:27:31.510057  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 3/120
	I1007 13:27:32.511426  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 4/120
	I1007 13:27:33.513965  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 5/120
	I1007 13:27:34.515637  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 6/120
	I1007 13:27:35.516986  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 7/120
	I1007 13:27:36.518755  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 8/120
	I1007 13:27:37.521412  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 9/120
	I1007 13:27:38.522906  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 10/120
	I1007 13:27:39.524843  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 11/120
	I1007 13:27:40.526410  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 12/120
	I1007 13:27:41.527895  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 13/120
	I1007 13:27:42.529190  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 14/120
	I1007 13:27:43.531707  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 15/120
	I1007 13:27:44.533339  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 16/120
	I1007 13:27:45.534705  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 17/120
	I1007 13:27:46.536385  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 18/120
	I1007 13:27:47.537783  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 19/120
	I1007 13:27:48.539683  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 20/120
	I1007 13:27:49.541271  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 21/120
	I1007 13:27:50.542853  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 22/120
	I1007 13:27:51.544509  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 23/120
	I1007 13:27:52.546456  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 24/120
	I1007 13:27:53.548949  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 25/120
	I1007 13:27:54.550449  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 26/120
	I1007 13:27:55.552577  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 27/120
	I1007 13:27:56.555138  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 28/120
	I1007 13:27:57.556537  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 29/120
	I1007 13:27:58.558204  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 30/120
	I1007 13:27:59.559628  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 31/120
	I1007 13:28:00.561175  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 32/120
	I1007 13:28:01.562605  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 33/120
	I1007 13:28:02.564049  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 34/120
	I1007 13:28:03.566368  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 35/120
	I1007 13:28:04.567975  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 36/120
	I1007 13:28:05.569590  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 37/120
	I1007 13:28:06.571009  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 38/120
	I1007 13:28:07.572758  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 39/120
	I1007 13:28:08.574955  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 40/120
	I1007 13:28:09.576611  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 41/120
	I1007 13:28:10.578409  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 42/120
	I1007 13:28:11.580526  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 43/120
	I1007 13:28:12.581999  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 44/120
	I1007 13:28:13.584219  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 45/120
	I1007 13:28:14.585808  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 46/120
	I1007 13:28:15.587541  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 47/120
	I1007 13:28:16.589010  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 48/120
	I1007 13:28:17.590620  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 49/120
	I1007 13:28:18.592136  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 50/120
	I1007 13:28:19.594380  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 51/120
	I1007 13:28:20.596829  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 52/120
	I1007 13:28:21.598355  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 53/120
	I1007 13:28:22.600771  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 54/120
	I1007 13:28:23.604733  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 55/120
	I1007 13:28:24.606126  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 56/120
	I1007 13:28:25.607705  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 57/120
	I1007 13:28:26.609196  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 58/120
	I1007 13:28:27.610738  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 59/120
	I1007 13:28:28.612931  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 60/120
	I1007 13:28:29.614869  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 61/120
	I1007 13:28:30.616751  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 62/120
	I1007 13:28:31.618186  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 63/120
	I1007 13:28:32.619643  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 64/120
	I1007 13:28:33.621812  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 65/120
	I1007 13:28:34.623382  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 66/120
	I1007 13:28:35.625678  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 67/120
	I1007 13:28:36.627109  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 68/120
	I1007 13:28:37.628874  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 69/120
	I1007 13:28:38.631519  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 70/120
	I1007 13:28:39.633201  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 71/120
	I1007 13:28:40.634734  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 72/120
	I1007 13:28:41.636287  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 73/120
	I1007 13:28:42.638066  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 74/120
	I1007 13:28:43.640208  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 75/120
	I1007 13:28:44.641753  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 76/120
	I1007 13:28:45.643275  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 77/120
	I1007 13:28:46.644950  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 78/120
	I1007 13:28:47.646483  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 79/120
	I1007 13:28:48.648152  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 80/120
	I1007 13:28:49.649462  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 81/120
	I1007 13:28:50.651820  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 82/120
	I1007 13:28:51.653079  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 83/120
	I1007 13:28:52.655230  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 84/120
	I1007 13:28:53.657269  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 85/120
	I1007 13:28:54.658833  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 86/120
	I1007 13:28:55.660160  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 87/120
	I1007 13:28:56.661885  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 88/120
	I1007 13:28:57.663333  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 89/120
	I1007 13:28:58.665889  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 90/120
	I1007 13:28:59.667827  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 91/120
	I1007 13:29:00.669432  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 92/120
	I1007 13:29:01.671178  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 93/120
	I1007 13:29:02.673033  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 94/120
	I1007 13:29:03.675028  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 95/120
	I1007 13:29:04.677089  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 96/120
	I1007 13:29:05.678861  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 97/120
	I1007 13:29:06.680822  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 98/120
	I1007 13:29:07.682211  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 99/120
	I1007 13:29:08.683748  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 100/120
	I1007 13:29:09.685490  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 101/120
	I1007 13:29:10.687121  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 102/120
	I1007 13:29:11.688748  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 103/120
	I1007 13:29:12.690322  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 104/120
	I1007 13:29:13.692497  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 105/120
	I1007 13:29:14.693932  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 106/120
	I1007 13:29:15.695376  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 107/120
	I1007 13:29:16.696799  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 108/120
	I1007 13:29:17.698432  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 109/120
	I1007 13:29:18.699954  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 110/120
	I1007 13:29:19.701480  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 111/120
	I1007 13:29:20.703095  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 112/120
	I1007 13:29:21.704542  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 113/120
	I1007 13:29:22.706187  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 114/120
	I1007 13:29:23.708622  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 115/120
	I1007 13:29:24.709983  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 116/120
	I1007 13:29:25.711823  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 117/120
	I1007 13:29:26.713291  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 118/120
	I1007 13:29:27.715086  799067 main.go:141] libmachine: (no-preload-016701) Waiting for machine to stop 119/120
	I1007 13:29:28.715989  799067 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 13:29:28.716066  799067 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 13:29:28.718015  799067 out.go:201] 
	W1007 13:29:28.719487  799067 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 13:29:28.719511  799067 out.go:270] * 
	* 
	W1007 13:29:28.723619  799067 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:29:28.725054  799067 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-016701 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701: exit status 3 (18.651749608s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:47.378383  799802 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E1007 13:29:47.378405  799802 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-016701" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-653322 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-653322 --alsologtostderr -v=3: exit status 82 (2m0.553575322s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-653322"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:27:40.492702  799219 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:27:40.492972  799219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:27:40.492980  799219 out.go:358] Setting ErrFile to fd 2...
	I1007 13:27:40.492985  799219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:27:40.493204  799219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:27:40.493448  799219 out.go:352] Setting JSON to false
	I1007 13:27:40.493528  799219 mustload.go:65] Loading cluster: embed-certs-653322
	I1007 13:27:40.493952  799219 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:27:40.494049  799219 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/embed-certs-653322/config.json ...
	I1007 13:27:40.494242  799219 mustload.go:65] Loading cluster: embed-certs-653322
	I1007 13:27:40.494360  799219 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:27:40.494397  799219 stop.go:39] StopHost: embed-certs-653322
	I1007 13:27:40.494803  799219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:27:40.494849  799219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:27:40.510405  799219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I1007 13:27:40.511015  799219 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:27:40.511781  799219 main.go:141] libmachine: Using API Version  1
	I1007 13:27:40.511811  799219 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:27:40.512223  799219 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:27:40.514822  799219 out.go:177] * Stopping node "embed-certs-653322"  ...
	I1007 13:27:40.516019  799219 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 13:27:40.516081  799219 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:27:40.516408  799219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 13:27:40.516447  799219 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:27:40.519516  799219 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:27:40.519890  799219 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:26:48 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:27:40.519935  799219 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:27:40.520237  799219 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:27:40.520445  799219 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:27:40.520640  799219 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:27:40.520834  799219 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:27:40.638676  799219 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 13:27:40.701092  799219 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 13:27:40.769027  799219 main.go:141] libmachine: Stopping "embed-certs-653322"...
	I1007 13:27:40.769078  799219 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:27:40.770901  799219 main.go:141] libmachine: (embed-certs-653322) Calling .Stop
	I1007 13:27:40.774766  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 0/120
	I1007 13:27:41.776235  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 1/120
	I1007 13:27:42.777723  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 2/120
	I1007 13:27:43.779347  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 3/120
	I1007 13:27:44.780908  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 4/120
	I1007 13:27:45.783401  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 5/120
	I1007 13:27:46.785646  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 6/120
	I1007 13:27:47.787212  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 7/120
	I1007 13:27:48.789476  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 8/120
	I1007 13:27:49.790965  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 9/120
	I1007 13:27:50.793276  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 10/120
	I1007 13:27:51.795009  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 11/120
	I1007 13:27:52.796731  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 12/120
	I1007 13:27:53.798699  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 13/120
	I1007 13:27:54.800537  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 14/120
	I1007 13:27:55.803260  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 15/120
	I1007 13:27:56.805129  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 16/120
	I1007 13:27:57.806584  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 17/120
	I1007 13:27:58.808368  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 18/120
	I1007 13:27:59.809967  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 19/120
	I1007 13:28:00.811892  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 20/120
	I1007 13:28:01.813396  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 21/120
	I1007 13:28:02.814993  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 22/120
	I1007 13:28:03.816453  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 23/120
	I1007 13:28:04.817999  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 24/120
	I1007 13:28:05.820372  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 25/120
	I1007 13:28:06.821720  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 26/120
	I1007 13:28:07.823563  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 27/120
	I1007 13:28:08.825298  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 28/120
	I1007 13:28:09.826920  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 29/120
	I1007 13:28:10.828645  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 30/120
	I1007 13:28:11.830485  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 31/120
	I1007 13:28:12.832120  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 32/120
	I1007 13:28:13.833824  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 33/120
	I1007 13:28:14.835264  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 34/120
	I1007 13:28:15.837645  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 35/120
	I1007 13:28:16.839173  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 36/120
	I1007 13:28:17.840754  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 37/120
	I1007 13:28:18.842088  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 38/120
	I1007 13:28:19.843889  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 39/120
	I1007 13:28:20.845236  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 40/120
	I1007 13:28:21.846938  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 41/120
	I1007 13:28:22.848537  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 42/120
	I1007 13:28:23.850216  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 43/120
	I1007 13:28:24.851821  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 44/120
	I1007 13:28:25.854187  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 45/120
	I1007 13:28:26.855985  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 46/120
	I1007 13:28:27.857499  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 47/120
	I1007 13:28:28.859247  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 48/120
	I1007 13:28:29.861373  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 49/120
	I1007 13:28:30.863667  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 50/120
	I1007 13:28:31.865079  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 51/120
	I1007 13:28:32.866617  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 52/120
	I1007 13:28:33.868358  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 53/120
	I1007 13:28:34.870191  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 54/120
	I1007 13:28:35.872674  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 55/120
	I1007 13:28:36.874194  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 56/120
	I1007 13:28:37.875639  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 57/120
	I1007 13:28:38.877333  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 58/120
	I1007 13:28:39.878730  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 59/120
	I1007 13:28:40.880526  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 60/120
	I1007 13:28:41.882144  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 61/120
	I1007 13:28:42.883879  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 62/120
	I1007 13:28:43.885457  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 63/120
	I1007 13:28:44.887622  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 64/120
	I1007 13:28:45.889704  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 65/120
	I1007 13:28:46.891292  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 66/120
	I1007 13:28:47.892861  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 67/120
	I1007 13:28:48.894450  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 68/120
	I1007 13:28:49.896758  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 69/120
	I1007 13:28:50.898434  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 70/120
	I1007 13:28:51.900153  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 71/120
	I1007 13:28:52.901463  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 72/120
	I1007 13:28:53.902859  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 73/120
	I1007 13:28:54.904535  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 74/120
	I1007 13:28:55.906562  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 75/120
	I1007 13:28:56.908155  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 76/120
	I1007 13:28:57.910427  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 77/120
	I1007 13:28:58.912655  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 78/120
	I1007 13:28:59.914101  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 79/120
	I1007 13:29:00.915693  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 80/120
	I1007 13:29:01.917265  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 81/120
	I1007 13:29:02.918613  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 82/120
	I1007 13:29:03.920149  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 83/120
	I1007 13:29:04.922812  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 84/120
	I1007 13:29:05.924847  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 85/120
	I1007 13:29:06.926649  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 86/120
	I1007 13:29:07.928164  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 87/120
	I1007 13:29:08.929534  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 88/120
	I1007 13:29:09.930964  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 89/120
	I1007 13:29:10.933582  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 90/120
	I1007 13:29:11.935375  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 91/120
	I1007 13:29:12.937144  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 92/120
	I1007 13:29:13.938877  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 93/120
	I1007 13:29:14.940317  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 94/120
	I1007 13:29:15.942588  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 95/120
	I1007 13:29:16.944135  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 96/120
	I1007 13:29:17.945583  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 97/120
	I1007 13:29:18.947161  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 98/120
	I1007 13:29:19.948686  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 99/120
	I1007 13:29:20.951071  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 100/120
	I1007 13:29:21.952709  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 101/120
	I1007 13:29:22.954243  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 102/120
	I1007 13:29:23.955840  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 103/120
	I1007 13:29:24.957544  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 104/120
	I1007 13:29:25.959705  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 105/120
	I1007 13:29:26.961151  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 106/120
	I1007 13:29:27.962698  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 107/120
	I1007 13:29:28.964201  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 108/120
	I1007 13:29:29.965839  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 109/120
	I1007 13:29:30.968292  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 110/120
	I1007 13:29:31.969728  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 111/120
	I1007 13:29:32.971136  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 112/120
	I1007 13:29:33.972515  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 113/120
	I1007 13:29:34.974301  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 114/120
	I1007 13:29:35.976448  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 115/120
	I1007 13:29:36.978606  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 116/120
	I1007 13:29:37.980952  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 117/120
	I1007 13:29:38.982607  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 118/120
	I1007 13:29:39.984233  799219 main.go:141] libmachine: (embed-certs-653322) Waiting for machine to stop 119/120
	I1007 13:29:40.984953  799219 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 13:29:40.985034  799219 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 13:29:40.987595  799219 out.go:201] 
	W1007 13:29:40.989393  799219 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 13:29:40.989420  799219 out.go:270] * 
	* 
	W1007 13:29:40.993366  799219 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:29:40.994873  799219 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-653322 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322: exit status 3 (18.669230476s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:59.666413  799880 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E1007 13:29:59.666434  799880 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-653322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-120978 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-120978 create -f testdata/busybox.yaml: exit status 1 (46.674339ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-120978" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-120978 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 6 (237.163688ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:26.543098  799713 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-120978" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-120978" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 6 (237.969137ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:26.781048  799743 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-120978" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-120978" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-120978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-120978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.22397365s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-120978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-120978 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-120978 describe deploy/metrics-server -n kube-system: exit status 1 (47.83865ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-120978" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-120978 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 6 (241.899773ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:31:09.295099  800669 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-120978" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-120978" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701: exit status 3 (3.199472489s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:50.578392  799925 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E1007 13:29:50.578413  799925 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-016701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1007 13:29:53.449355  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-016701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155078437s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-016701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701: exit status 3 (3.061091634s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:29:59.794590  800011 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E1007 13:29:59.794618  800011 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-016701" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322: exit status 3 (3.199815116s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:30:02.866460  800057 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E1007 13:30:02.866483  800057 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-653322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-653322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154418068s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-653322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322: exit status 3 (3.061180099s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:30:12.082418  800164 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E1007 13:30:12.082439  800164 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-653322" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (752.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 13:34:53.449371  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:35:13.699049  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m30.06401968s)

                                                
                                                
-- stdout --
	* [old-k8s-version-120978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-120978" primary control-plane node in "old-k8s-version-120978" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-120978" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:31:15.885202  800812 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:31:15.885344  800812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:31:15.885354  800812 out.go:358] Setting ErrFile to fd 2...
	I1007 13:31:15.885358  800812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:31:15.885541  800812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:31:15.886150  800812 out.go:352] Setting JSON to false
	I1007 13:31:15.887187  800812 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11625,"bootTime":1728296251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:31:15.887313  800812 start.go:139] virtualization: kvm guest
	I1007 13:31:15.889836  800812 out.go:177] * [old-k8s-version-120978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:31:15.891067  800812 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:31:15.891096  800812 notify.go:220] Checking for updates...
	I1007 13:31:15.893713  800812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:31:15.894953  800812 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:31:15.896243  800812 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:31:15.897762  800812 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:31:15.899347  800812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:31:15.901246  800812 config.go:182] Loaded profile config "old-k8s-version-120978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:31:15.901690  800812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:31:15.901772  800812 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:31:15.917555  800812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I1007 13:31:15.918167  800812 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:31:15.918771  800812 main.go:141] libmachine: Using API Version  1
	I1007 13:31:15.918799  800812 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:31:15.919155  800812 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:31:15.919429  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:31:15.921498  800812 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 13:31:15.922839  800812 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:31:15.923200  800812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:31:15.923253  800812 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:31:15.938782  800812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I1007 13:31:15.939216  800812 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:31:15.939754  800812 main.go:141] libmachine: Using API Version  1
	I1007 13:31:15.939795  800812 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:31:15.940159  800812 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:31:15.940374  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:31:15.981078  800812 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:31:15.982797  800812 start.go:297] selected driver: kvm2
	I1007 13:31:15.982911  800812 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:31:15.983084  800812 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:31:15.983942  800812 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:31:15.984036  800812 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:31:16.001187  800812 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:31:16.001631  800812 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:31:16.001669  800812 cni.go:84] Creating CNI manager for ""
	I1007 13:31:16.001720  800812 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:31:16.001760  800812 start.go:340] cluster config:
	{Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:31:16.001879  800812 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:31:16.003878  800812 out.go:177] * Starting "old-k8s-version-120978" primary control-plane node in "old-k8s-version-120978" cluster
	I1007 13:31:16.005124  800812 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:31:16.005184  800812 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 13:31:16.005196  800812 cache.go:56] Caching tarball of preloaded images
	I1007 13:31:16.005288  800812 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:31:16.005304  800812 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 13:31:16.005432  800812 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/config.json ...
	I1007 13:31:16.005654  800812 start.go:360] acquireMachinesLock for old-k8s-version-120978: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:35:19.475773  800812 start.go:364] duration metric: took 4m3.470070639s to acquireMachinesLock for "old-k8s-version-120978"
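
The four-minute gap between the config dump at 13:31:16 and the "fixHost starting" line below is time spent waiting on the per-profile machines lock; the log shows the lock options {Delay:500ms Timeout:13m0s}. The following is a minimal sketch of that kind of poll-with-delay named lock, not minikube's actual implementation; the lock-file path and function names are illustrative only.

// lockwait.go: a minimal sketch of a poll-with-delay named lock, in the spirit of the
// {Delay:500ms Timeout:13m0s} options logged above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until it succeeds or the timeout expires.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail while another process still holds the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors the "duration metric: took ... to acquireMachinesLock" line above.
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}
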
	I1007 13:35:19.475847  800812 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:35:19.475853  800812 fix.go:54] fixHost starting: 
	I1007 13:35:19.476258  800812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:35:19.476322  800812 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:35:19.497484  800812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35803
	I1007 13:35:19.497950  800812 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:35:19.498544  800812 main.go:141] libmachine: Using API Version  1
	I1007 13:35:19.498572  800812 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:35:19.499072  800812 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:35:19.499465  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:19.499817  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetState
	I1007 13:35:19.501858  800812 fix.go:112] recreateIfNeeded on old-k8s-version-120978: state=Stopped err=<nil>
	I1007 13:35:19.501893  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	W1007 13:35:19.502115  800812 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:35:19.504658  800812 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-120978" ...
	I1007 13:35:19.506241  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .Start
	I1007 13:35:19.506502  800812 main.go:141] libmachine: (old-k8s-version-120978) Ensuring networks are active...
	I1007 13:35:19.507523  800812 main.go:141] libmachine: (old-k8s-version-120978) Ensuring network default is active
	I1007 13:35:19.507955  800812 main.go:141] libmachine: (old-k8s-version-120978) Ensuring network mk-old-k8s-version-120978 is active
	I1007 13:35:19.508530  800812 main.go:141] libmachine: (old-k8s-version-120978) Getting domain xml...
	I1007 13:35:19.509447  800812 main.go:141] libmachine: (old-k8s-version-120978) Creating domain...
	I1007 13:35:19.881526  800812 main.go:141] libmachine: (old-k8s-version-120978) Waiting to get IP...
	I1007 13:35:19.882561  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:19.882999  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:19.883065  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:19.882970  801840 retry.go:31] will retry after 298.092835ms: waiting for machine to come up
	I1007 13:35:20.182466  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:20.182928  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:20.182994  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:20.182910  801840 retry.go:31] will retry after 357.331005ms: waiting for machine to come up
	I1007 13:35:20.541503  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:20.542070  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:20.542096  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:20.542011  801840 retry.go:31] will retry after 366.509891ms: waiting for machine to come up
	I1007 13:35:20.910686  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:20.911190  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:20.911219  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:20.911153  801840 retry.go:31] will retry after 398.555998ms: waiting for machine to come up
	I1007 13:35:21.311908  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:21.312378  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:21.312410  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:21.312328  801840 retry.go:31] will retry after 526.778164ms: waiting for machine to come up
	I1007 13:35:21.840705  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:21.841223  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:21.841255  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:21.841183  801840 retry.go:31] will retry after 815.349039ms: waiting for machine to come up
	I1007 13:35:22.658260  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:22.658841  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:22.658871  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:22.658781  801840 retry.go:31] will retry after 1.066838486s: waiting for machine to come up
	I1007 13:35:23.727857  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:23.728422  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:23.728454  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:23.728369  801840 retry.go:31] will retry after 1.428486328s: waiting for machine to come up
	I1007 13:35:25.158511  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:25.159043  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:25.159085  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:25.158986  801840 retry.go:31] will retry after 1.152260622s: waiting for machine to come up
	I1007 13:35:26.312543  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:26.312982  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:26.313001  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:26.312914  801840 retry.go:31] will retry after 2.260609173s: waiting for machine to come up
	I1007 13:35:28.576176  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:28.576597  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:28.576625  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:28.576557  801840 retry.go:31] will retry after 2.632489704s: waiting for machine to come up
	I1007 13:35:31.211380  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:31.212066  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:31.212089  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:31.211945  801840 retry.go:31] will retry after 2.343383658s: waiting for machine to come up
	I1007 13:35:33.556868  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:33.557433  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | unable to find current IP address of domain old-k8s-version-120978 in network mk-old-k8s-version-120978
	I1007 13:35:33.557471  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | I1007 13:35:33.557407  801840 retry.go:31] will retry after 3.128687098s: waiting for machine to come up
	I1007 13:35:36.687404  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.687861  800812 main.go:141] libmachine: (old-k8s-version-120978) Found IP for machine: 192.168.83.103
	I1007 13:35:36.687906  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has current primary IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.687917  800812 main.go:141] libmachine: (old-k8s-version-120978) Reserving static IP address...
	I1007 13:35:36.688360  800812 main.go:141] libmachine: (old-k8s-version-120978) Reserved static IP address: 192.168.83.103
	I1007 13:35:36.688387  800812 main.go:141] libmachine: (old-k8s-version-120978) Waiting for SSH to be available...
	I1007 13:35:36.688409  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "old-k8s-version-120978", mac: "52:54:00:ce:bc:6d", ip: "192.168.83.103"} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:36.688438  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | skip adding static IP to network mk-old-k8s-version-120978 - found existing host DHCP lease matching {name: "old-k8s-version-120978", mac: "52:54:00:ce:bc:6d", ip: "192.168.83.103"}
	I1007 13:35:36.688451  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | Getting to WaitForSSH function...
	I1007 13:35:36.690409  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.690679  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:36.690706  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.690813  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | Using SSH client type: external
	I1007 13:35:36.690854  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa (-rw-------)
	I1007 13:35:36.690888  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:35:36.690900  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | About to run SSH command:
	I1007 13:35:36.690912  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | exit 0
	I1007 13:35:36.822905  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | SSH cmd err, output: <nil>: 
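
The block above shows two waits after restarting the VM: the driver repeatedly looks for the domain's DHCP lease with growing, randomized delays ("will retry after ..."), and once an IP appears it probes SSH by running `exit 0` with the external ssh client. Below is a hedged sketch of that wait-then-probe pattern; the lookupIP stub, attempt count, and key path are placeholders, not minikube's code.

// waitvm.go: a sketch of the wait-for-IP / wait-for-SSH pattern seen above: poll with
// growing randomized delays until an address appears, then run `exit 0` over ssh until
// it succeeds. Placeholder logic only.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// lookupIP stands in for parsing the libvirt DHCP leases; it is a placeholder stub.
func lookupIP(mac string) (string, bool) {
	return "192.168.83.103", true // pretend the lease finally shows up
}

func waitForIP(mac string, attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, ok := lookupIP(mac); ok {
			return ip, nil
		}
		// Growing, slightly randomized backoff, like the "will retry after ..." lines.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
}

func waitForSSH(ip, keyPath string) error {
	// Equivalent of the external-ssh "exit 0" probe logged above.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath, "docker@"+ip, "exit 0")
	return cmd.Run()
}

func main() {
	ip, err := waitForIP("52:54:00:ce:bc:6d", 20)
	if err != nil {
		panic(err)
	}
	if err := waitForSSH(ip, "/path/to/id_rsa"); err != nil {
		panic(err)
	}
	fmt.Println("machine is reachable at", ip)
}
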
	I1007 13:35:36.823301  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetConfigRaw
	I1007 13:35:36.823999  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:35:36.826971  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.827454  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:36.827478  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.827824  800812 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/config.json ...
	I1007 13:35:36.828058  800812 machine.go:93] provisionDockerMachine start ...
	I1007 13:35:36.828079  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:36.828301  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:36.830686  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.831042  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:36.831077  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.831193  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:36.831413  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:36.831609  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:36.831773  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:36.831954  800812 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:36.832168  800812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:35:36.832181  800812 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:35:36.944221  800812 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:35:36.944262  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:35:36.944558  800812 buildroot.go:166] provisioning hostname "old-k8s-version-120978"
	I1007 13:35:36.944591  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:35:36.944799  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:36.947804  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.948284  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:36.948316  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:36.948625  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:36.948838  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:36.949016  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:36.949195  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:36.949395  800812 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:36.949641  800812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:35:36.949655  800812 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-120978 && echo "old-k8s-version-120978" | sudo tee /etc/hostname
	I1007 13:35:37.078783  800812 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-120978
	
	I1007 13:35:37.078821  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.082210  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.082621  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.082659  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.082873  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:37.083187  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.083392  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.083574  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:37.083782  800812 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:37.083973  800812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:35:37.083991  800812 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-120978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-120978/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-120978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:35:37.207482  800812 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:35:37.207535  800812 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:35:37.207569  800812 buildroot.go:174] setting up certificates
	I1007 13:35:37.207583  800812 provision.go:84] configureAuth start
	I1007 13:35:37.207597  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetMachineName
	I1007 13:35:37.207893  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:35:37.210769  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.211169  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.211215  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.211361  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.213917  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.214331  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.214361  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.214517  800812 provision.go:143] copyHostCerts
	I1007 13:35:37.214576  800812 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:35:37.214588  800812 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:35:37.214648  800812 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:35:37.214741  800812 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:35:37.214761  800812 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:35:37.214790  800812 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:35:37.214929  800812 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:35:37.214942  800812 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:35:37.214973  800812 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:35:37.215038  800812 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-120978 san=[127.0.0.1 192.168.83.103 localhost minikube old-k8s-version-120978]
	I1007 13:35:37.386012  800812 provision.go:177] copyRemoteCerts
	I1007 13:35:37.386123  800812 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:35:37.386158  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.389278  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.389570  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.389617  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.389789  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:37.390055  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.390233  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:37.390409  800812 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:35:37.476742  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1007 13:35:37.507333  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:35:37.537470  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:35:37.565630  800812 provision.go:87] duration metric: took 358.031911ms to configureAuth
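
configureAuth above copies the host CA and client certs onto the machine and generates a fresh server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube, and the profile name. The sketch below generates a SAN-bearing server certificate with crypto/x509 under that assumption; for brevity it self-signs, whereas the real flow signs with the minikube CA key, and the organization/expiry values simply echo the log.

// servercert.go: a compact sketch of generating a server certificate whose SANs match
// the list logged above. Self-signed here for brevity; not minikube's code.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-120978"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-120978"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.103")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
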
	I1007 13:35:37.565665  800812 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:35:37.565858  800812 config.go:182] Loaded profile config "old-k8s-version-120978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:35:37.565936  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.568767  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.569209  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.569249  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.569440  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:37.569679  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.569842  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.570077  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:37.570267  800812 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:37.570527  800812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:35:37.570544  800812 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:35:37.824301  800812 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:35:37.824331  800812 machine.go:96] duration metric: took 996.259396ms to provisionDockerMachine
	I1007 13:35:37.824346  800812 start.go:293] postStartSetup for "old-k8s-version-120978" (driver="kvm2")
	I1007 13:35:37.824360  800812 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:35:37.824399  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:37.824824  800812 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:35:37.824875  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.828019  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.828438  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.828461  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.828700  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:37.828916  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.829107  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:37.829242  800812 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:35:37.914114  800812 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:35:37.919331  800812 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:35:37.919368  800812 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:35:37.919448  800812 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:35:37.919548  800812 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:35:37.919659  800812 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:35:37.931179  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:35:37.962904  800812 start.go:296] duration metric: took 138.54114ms for postStartSetup
	I1007 13:35:37.962961  800812 fix.go:56] duration metric: took 18.487106529s for fixHost
	I1007 13:35:37.962986  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:37.966322  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.966675  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:37.966725  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:37.966894  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:37.967129  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.967320  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:37.967482  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:37.967646  800812 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:37.967845  800812 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.83.103 22 <nil> <nil>}
	I1007 13:35:37.967856  800812 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:35:38.079598  800812 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308138.047263014
	
	I1007 13:35:38.079627  800812 fix.go:216] guest clock: 1728308138.047263014
	I1007 13:35:38.079634  800812 fix.go:229] Guest: 2024-10-07 13:35:38.047263014 +0000 UTC Remote: 2024-10-07 13:35:37.962967323 +0000 UTC m=+262.118981569 (delta=84.295691ms)
	I1007 13:35:38.079660  800812 fix.go:200] guest clock delta is within tolerance: 84.295691ms
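
The clock check above runs `date +%s.%N` on the guest and compares the result with the host time recorded just before; the fix step only resyncs when the delta leaves a tolerance window. A minimal sketch of that comparison follows; the 2s tolerance is illustrative, not the value minikube uses.

// clockskew.go: a minimal sketch of the guest-clock check logged above: parse the
// guest's `date +%s.%N` output, compare it with the host time, and flag the delta
// if it falls outside a tolerance. The tolerance value is illustrative.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1728308138.047263014" // as returned by `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now()

	delta := guest.Sub(host)
	tolerance := 2 * time.Second
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %s exceeds tolerance %s, would resync\n", delta, tolerance)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
}
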
	I1007 13:35:38.079668  800812 start.go:83] releasing machines lock for "old-k8s-version-120978", held for 18.603839175s
	I1007 13:35:38.079715  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:38.080048  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:35:38.083200  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.083619  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:38.083651  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.083854  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:38.084509  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:38.084702  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .DriverName
	I1007 13:35:38.084791  800812 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:35:38.084833  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:38.084986  800812 ssh_runner.go:195] Run: cat /version.json
	I1007 13:35:38.085014  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHHostname
	I1007 13:35:38.087784  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.087818  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.088284  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:38.088323  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:38.088352  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.088374  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:38.088536  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:38.088650  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHPort
	I1007 13:35:38.088732  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:38.088922  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHKeyPath
	I1007 13:35:38.088932  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:38.089077  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetSSHUsername
	I1007 13:35:38.089093  800812 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:35:38.089251  800812 sshutil.go:53] new ssh client: &{IP:192.168.83.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/old-k8s-version-120978/id_rsa Username:docker}
	I1007 13:35:38.216010  800812 ssh_runner.go:195] Run: systemctl --version
	I1007 13:35:38.223104  800812 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:35:38.374822  800812 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:35:38.381673  800812 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:35:38.381777  800812 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:35:38.410060  800812 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:35:38.410089  800812 start.go:495] detecting cgroup driver to use...
	I1007 13:35:38.410169  800812 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:35:38.440114  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:35:38.457253  800812 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:35:38.457329  800812 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:35:38.472902  800812 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:35:38.489905  800812 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:35:38.616786  800812 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:35:38.800613  800812 docker.go:233] disabling docker service ...
	I1007 13:35:38.800692  800812 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:35:38.817465  800812 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:35:38.832031  800812 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:35:38.979355  800812 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:35:39.130784  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:35:39.150857  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:35:39.173697  800812 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 13:35:39.173793  800812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:35:39.186205  800812 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:35:39.186268  800812 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:35:39.201907  800812 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:35:39.216658  800812 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:35:39.229155  800812 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:35:39.244688  800812 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:35:39.259260  800812 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:35:39.259323  800812 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:35:39.277677  800812 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
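
The netfilter steps above follow a check-then-fallback pattern: the sysctl probe fails with status 255 because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is modprobed and IPv4 forwarding is enabled afterwards. A small sketch of that fallback, with the same commands as the log and simplified error handling:

// netfilter.go: a sketch of the fallback above: check the bridge netfilter sysctl,
// load br_netfilter if the key is missing, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	// The sysctl key only exists once br_netfilter is loaded, so a failure here is
	// expected on a fresh guest (the status-255 error in the log) and triggers modprobe.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
	fmt.Println("IPv4 forwarding enabled")
}
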
	I1007 13:35:39.289570  800812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:35:39.414891  800812 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:35:39.525037  800812 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:35:39.525226  800812 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:35:39.531896  800812 start.go:563] Will wait 60s for crictl version
	I1007 13:35:39.531963  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:39.536398  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:35:39.584333  800812 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
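
After restarting cri-o the start flow gives itself two 60-second budgets: one for the CRI socket to appear, one for `crictl version` to answer. The sketch below shows that two-stage wait under those assumptions; the poll interval is illustrative and the paths mirror the log.

// criwait.go: a sketch of the two 60s waits logged above: first for the CRI socket
// to exist, then for `crictl version` to succeed against it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls fn every interval until it returns nil or the timeout expires.
func waitFor(timeout, interval time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		panic(err)
	}
	if err := waitFor(60*time.Second, time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		panic(err)
	}
	fmt.Println("cri-o is up and answering crictl")
}
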
	I1007 13:35:39.584523  800812 ssh_runner.go:195] Run: crio --version
	I1007 13:35:39.625110  800812 ssh_runner.go:195] Run: crio --version
	I1007 13:35:39.665219  800812 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 13:35:39.666583  800812 main.go:141] libmachine: (old-k8s-version-120978) Calling .GetIP
	I1007 13:35:39.669847  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:39.670355  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:bc:6d", ip: ""} in network mk-old-k8s-version-120978: {Iface:virbr2 ExpiryTime:2024-10-07 14:35:30 +0000 UTC Type:0 Mac:52:54:00:ce:bc:6d Iaid: IPaddr:192.168.83.103 Prefix:24 Hostname:old-k8s-version-120978 Clientid:01:52:54:00:ce:bc:6d}
	I1007 13:35:39.670409  800812 main.go:141] libmachine: (old-k8s-version-120978) DBG | domain old-k8s-version-120978 has defined IP address 192.168.83.103 and MAC address 52:54:00:ce:bc:6d in network mk-old-k8s-version-120978
	I1007 13:35:39.670750  800812 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1007 13:35:39.676864  800812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:35:39.695383  800812 kubeadm.go:883] updating cluster {Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1007 13:35:39.695541  800812 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:35:39.695600  800812 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:35:39.754858  800812 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:35:39.754930  800812 ssh_runner.go:195] Run: which lz4
	I1007 13:35:39.759540  800812 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:35:39.764432  800812 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:35:39.764472  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 13:35:41.759318  800812 crio.go:462] duration metric: took 1.999833348s to copy over tarball
	I1007 13:35:41.759458  800812 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:35:45.150069  800812 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.390538987s)
	I1007 13:35:45.150118  800812 crio.go:469] duration metric: took 3.390758594s to extract the tarball
	I1007 13:35:45.150129  800812 ssh_runner.go:146] rm: /preloaded.tar.lz4
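
The preload path above works in three steps: stat /preloaded.tar.lz4 on the guest (missing on first run), scp the cached tarball over, then unpack it into /var with lz4 and security xattrs preserved before deleting it. A hedged sketch of the extraction step follows; it assumes the tarball is already on the machine and shells out with the same flags as the log.

// preload.go: a sketch of the preload-extraction step above: if /preloaded.tar.lz4
// is present, unpack it into /var with lz4 and xattrs preserved, then remove it.
// Assumes the tarball was already copied over and that sudo is available.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	tarball := "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preloaded tarball found, nothing to extract:", err)
		return
	}
	start := time.Now()
	// Same flags as the log: keep security xattrs, decompress with lz4, unpack under /var.
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		panic(err)
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
	if err := run("sudo", "rm", "-f", tarball); err != nil {
		panic(err)
	}
}
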
	I1007 13:35:45.194798  800812 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:35:45.233204  800812 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:35:45.233237  800812 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:35:45.233330  800812 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:35:45.233393  800812 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.233425  800812 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 13:35:45.233437  800812 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.233373  800812 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.233399  800812 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.233494  800812 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.233557  800812 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.235669  800812 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.235691  800812 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.235753  800812 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.235771  800812 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:35:45.235795  800812 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.235668  800812 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.235832  800812 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.235819  800812 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 13:35:45.401587  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.401901  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.409792  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.410341  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.412741  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.415694  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.449736  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 13:35:45.554810  800812 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 13:35:45.554885  800812 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 13:35:45.554890  800812 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.554905  800812 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.554953  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.554953  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.554996  800812 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 13:35:45.555036  800812 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.555044  800812 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 13:35:45.555063  800812 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.555084  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.555094  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.604994  800812 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 13:35:45.605044  800812 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.605099  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.605138  800812 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 13:35:45.605180  800812 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.605223  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.611289  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.611320  800812 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 13:35:45.611363  800812 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 13:35:45.611394  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.611461  800812 ssh_runner.go:195] Run: which crictl
	I1007 13:35:45.611515  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.611534  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.612388  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.615645  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.721369  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.741752  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.745127  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:35:45.753277  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.753346  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.756844  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.759111  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:45.915230  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:35:45.916893  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:35:45.934050  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:35:45.934105  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:35:45.934050  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:35:45.934105  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:35:45.947618  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:35:46.062548  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 13:35:46.063447  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 13:35:46.081632  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 13:35:46.099306  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 13:35:46.099393  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 13:35:46.099441  800812 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:35:46.099487  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 13:35:46.105689  800812 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:35:46.144220  800812 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 13:35:46.264974  800812 cache_images.go:92] duration metric: took 1.031713523s to LoadCachedImages
	W1007 13:35:46.265102  800812 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/18424-747025/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1007 13:35:46.265123  800812 kubeadm.go:934] updating node { 192.168.83.103 8443 v1.20.0 crio true true} ...
	I1007 13:35:46.265243  800812 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-120978 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:35:46.265321  800812 ssh_runner.go:195] Run: crio config
	I1007 13:35:46.316013  800812 cni.go:84] Creating CNI manager for ""
	I1007 13:35:46.316044  800812 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:35:46.316056  800812 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:35:46.316087  800812 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.103 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-120978 NodeName:old-k8s-version-120978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 13:35:46.316294  800812 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-120978"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:35:46.316376  800812 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 13:35:46.329922  800812 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:35:46.330005  800812 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:35:46.342301  800812 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1007 13:35:46.366592  800812 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:35:46.390435  800812 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1007 13:35:46.409473  800812 ssh_runner.go:195] Run: grep 192.168.83.103	control-plane.minikube.internal$ /etc/hosts
	I1007 13:35:46.413857  800812 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:35:46.428033  800812 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:35:46.558760  800812 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:35:46.580638  800812 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978 for IP: 192.168.83.103
	I1007 13:35:46.580672  800812 certs.go:194] generating shared ca certs ...
	I1007 13:35:46.580702  800812 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:35:46.580895  800812 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:35:46.580960  800812 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:35:46.580978  800812 certs.go:256] generating profile certs ...
	I1007 13:35:46.581136  800812 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.key
	I1007 13:35:46.581222  800812 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key.a8838b3f
	I1007 13:35:46.581277  800812 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key
	I1007 13:35:46.581445  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:35:46.581493  800812 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:35:46.581505  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:35:46.581537  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:35:46.581567  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:35:46.581600  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:35:46.581655  800812 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:35:46.582609  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:35:46.623920  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:35:46.674337  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:35:46.721799  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:35:46.756255  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 13:35:46.789666  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:35:46.837147  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:35:46.881788  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:35:46.913794  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:35:46.942421  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:35:46.976071  800812 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:35:47.003499  800812 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:35:47.026628  800812 ssh_runner.go:195] Run: openssl version
	I1007 13:35:47.033475  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:35:47.046425  800812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:35:47.053417  800812 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:35:47.053501  800812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:35:47.060535  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:35:47.072621  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:35:47.085492  800812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:35:47.091382  800812 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:35:47.091474  800812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:35:47.098328  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:35:47.113987  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:35:47.127027  800812 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:35:47.132480  800812 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:35:47.132554  800812 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:35:47.139436  800812 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:35:47.153013  800812 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:35:47.158785  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:35:47.167540  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:35:47.174651  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:35:47.181842  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:35:47.189198  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:35:47.196389  800812 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:35:47.202908  800812 kubeadm.go:392] StartCluster: {Name:old-k8s-version-120978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.103 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:35:47.203016  800812 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:35:47.203081  800812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:35:47.248116  800812 cri.go:89] found id: ""
	I1007 13:35:47.248223  800812 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:35:47.262002  800812 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:35:47.262051  800812 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:35:47.262110  800812 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:35:47.274236  800812 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:35:47.278117  800812 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-120978" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:35:47.279251  800812 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-747025/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-120978" cluster setting kubeconfig missing "old-k8s-version-120978" context setting]
	I1007 13:35:47.280661  800812 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:35:47.283325  800812 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:35:47.295042  800812 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.103
	I1007 13:35:47.295091  800812 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:35:47.295109  800812 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:35:47.295183  800812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:35:47.342809  800812 cri.go:89] found id: ""
	I1007 13:35:47.342902  800812 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:35:47.360595  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:35:47.372017  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:35:47.372047  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:35:47.372095  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:35:47.383375  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:35:47.383459  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:35:47.394454  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:35:47.404922  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:35:47.404991  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:35:47.415865  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:35:47.426391  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:35:47.426450  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:35:47.437183  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:35:47.447196  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:35:47.447284  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:35:47.461131  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:35:47.472796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:35:47.627572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:35:48.194759  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:35:48.458639  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:35:48.604441  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:35:48.727979  800812 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:35:48.728123  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:49.229105  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:49.728248  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:50.228848  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:50.728256  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:51.229115  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:51.728507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:52.228409  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:52.728968  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:53.228256  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:53.729160  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:54.228921  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:54.728897  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:55.228699  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:55.728925  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:56.228543  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:56.728879  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:57.228152  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:57.729070  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:58.228492  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:58.728278  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:59.228590  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:35:59.728405  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:00.229264  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:00.729202  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:01.228502  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:01.729162  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:02.229232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:02.728418  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:03.228306  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:03.728868  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:04.229125  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:04.729082  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:05.228638  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:05.729240  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:06.228958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:06.728701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:07.228152  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:07.728612  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:08.228391  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:08.728870  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:09.228312  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:09.729250  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:10.228607  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:10.728431  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:11.228935  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:11.728593  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:12.229051  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:12.729143  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:13.228163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:13.729197  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:14.228437  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:14.729145  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:15.228249  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:15.728925  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:16.228204  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:16.728365  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:17.228657  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:17.728628  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:18.228329  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:18.728503  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:19.228886  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:19.728769  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:20.228557  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:20.728429  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:21.228393  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:21.729012  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:22.229065  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:22.729129  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:23.228873  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:23.728577  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:24.229033  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:24.729158  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:25.228396  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:25.728758  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:26.228396  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:26.728721  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:27.229058  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:27.728195  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:28.228736  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:28.728810  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:29.228801  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:29.728273  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:30.228238  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:30.729144  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:31.228417  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:31.728916  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:32.228500  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:32.728482  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:33.229092  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:33.728967  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:34.228150  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:34.728384  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:35.229089  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:35.729045  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:36.228171  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:36.728251  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:37.228196  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:37.729011  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:38.228608  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:38.729003  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:39.228269  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:39.729114  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:40.229106  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:40.728293  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:41.229138  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:41.729191  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:42.228275  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:42.728393  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:43.228929  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:43.728972  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:44.228394  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:44.728495  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:45.228398  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:45.728288  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:46.228631  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:46.728333  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:47.228272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:47.728912  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:48.228334  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:48.728895  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:36:48.728999  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:36:48.776886  800812 cri.go:89] found id: ""
	I1007 13:36:48.776915  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.776924  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:36:48.776932  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:36:48.777031  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:36:48.813533  800812 cri.go:89] found id: ""
	I1007 13:36:48.813565  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.813576  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:36:48.813584  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:36:48.813650  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:36:48.856075  800812 cri.go:89] found id: ""
	I1007 13:36:48.856107  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.856119  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:36:48.856126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:36:48.856181  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:36:48.896612  800812 cri.go:89] found id: ""
	I1007 13:36:48.896644  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.896652  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:36:48.896658  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:36:48.896714  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:36:48.929109  800812 cri.go:89] found id: ""
	I1007 13:36:48.929153  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.929165  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:36:48.929174  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:36:48.929232  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:36:48.965063  800812 cri.go:89] found id: ""
	I1007 13:36:48.965107  800812 logs.go:282] 0 containers: []
	W1007 13:36:48.965121  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:36:48.965130  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:36:48.965202  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:36:49.002076  800812 cri.go:89] found id: ""
	I1007 13:36:49.002110  800812 logs.go:282] 0 containers: []
	W1007 13:36:49.002122  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:36:49.002131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:36:49.002236  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:36:49.042445  800812 cri.go:89] found id: ""
	I1007 13:36:49.042485  800812 logs.go:282] 0 containers: []
	W1007 13:36:49.042497  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:36:49.042512  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:36:49.042528  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:36:49.116348  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:36:49.116399  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:36:49.161902  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:36:49.161932  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:36:49.211832  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:36:49.211879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:36:49.225756  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:36:49.225789  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:36:49.366360  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:36:51.867080  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:51.881157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:36:51.881251  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:36:51.918616  800812 cri.go:89] found id: ""
	I1007 13:36:51.918646  800812 logs.go:282] 0 containers: []
	W1007 13:36:51.918655  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:36:51.918661  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:36:51.918742  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:36:51.971325  800812 cri.go:89] found id: ""
	I1007 13:36:51.971367  800812 logs.go:282] 0 containers: []
	W1007 13:36:51.971380  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:36:51.971388  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:36:51.971458  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:36:52.009520  800812 cri.go:89] found id: ""
	I1007 13:36:52.009559  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.009570  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:36:52.009579  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:36:52.009649  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:36:52.047412  800812 cri.go:89] found id: ""
	I1007 13:36:52.047453  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.047465  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:36:52.047474  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:36:52.047544  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:36:52.083410  800812 cri.go:89] found id: ""
	I1007 13:36:52.083440  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.083448  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:36:52.083455  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:36:52.083509  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:36:52.125254  800812 cri.go:89] found id: ""
	I1007 13:36:52.125289  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.125297  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:36:52.125304  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:36:52.125367  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:36:52.164980  800812 cri.go:89] found id: ""
	I1007 13:36:52.165016  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.165025  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:36:52.165033  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:36:52.165092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:36:52.202154  800812 cri.go:89] found id: ""
	I1007 13:36:52.202182  800812 logs.go:282] 0 containers: []
	W1007 13:36:52.202191  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:36:52.202201  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:36:52.202215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:36:52.257496  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:36:52.257544  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:36:52.272363  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:36:52.272397  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:36:52.349245  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:36:52.349275  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:36:52.349293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:36:52.427034  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:36:52.427084  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:36:54.972032  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:54.986339  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:36:54.986424  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:36:55.023646  800812 cri.go:89] found id: ""
	I1007 13:36:55.023675  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.023684  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:36:55.023701  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:36:55.023770  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:36:55.060948  800812 cri.go:89] found id: ""
	I1007 13:36:55.060982  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.060995  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:36:55.061002  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:36:55.061065  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:36:55.100934  800812 cri.go:89] found id: ""
	I1007 13:36:55.100968  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.100976  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:36:55.100983  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:36:55.101038  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:36:55.139987  800812 cri.go:89] found id: ""
	I1007 13:36:55.140020  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.140030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:36:55.140038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:36:55.140092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:36:55.176556  800812 cri.go:89] found id: ""
	I1007 13:36:55.176589  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.176606  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:36:55.176614  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:36:55.176681  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:36:55.216243  800812 cri.go:89] found id: ""
	I1007 13:36:55.216279  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.216288  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:36:55.216294  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:36:55.216361  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:36:55.261242  800812 cri.go:89] found id: ""
	I1007 13:36:55.261279  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.261291  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:36:55.261299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:36:55.261366  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:36:55.298942  800812 cri.go:89] found id: ""
	I1007 13:36:55.298981  800812 logs.go:282] 0 containers: []
	W1007 13:36:55.298993  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:36:55.299007  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:36:55.299032  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:36:55.375630  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:36:55.375667  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:36:55.375685  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:36:55.453325  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:36:55.453378  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:36:55.495516  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:36:55.495555  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:36:55.545907  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:36:55.545959  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:36:58.062002  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:58.075479  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:36:58.075558  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:36:58.111172  800812 cri.go:89] found id: ""
	I1007 13:36:58.111202  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.111211  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:36:58.111218  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:36:58.111286  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:36:58.151807  800812 cri.go:89] found id: ""
	I1007 13:36:58.151846  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.151861  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:36:58.151870  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:36:58.151938  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:36:58.189849  800812 cri.go:89] found id: ""
	I1007 13:36:58.189889  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.189898  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:36:58.189907  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:36:58.189981  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:36:58.227468  800812 cri.go:89] found id: ""
	I1007 13:36:58.227500  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.227508  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:36:58.227514  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:36:58.227574  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:36:58.263451  800812 cri.go:89] found id: ""
	I1007 13:36:58.263479  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.263487  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:36:58.263493  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:36:58.263554  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:36:58.304242  800812 cri.go:89] found id: ""
	I1007 13:36:58.304274  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.304286  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:36:58.304303  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:36:58.304364  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:36:58.346393  800812 cri.go:89] found id: ""
	I1007 13:36:58.346429  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.346440  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:36:58.346449  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:36:58.346518  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:36:58.386267  800812 cri.go:89] found id: ""
	I1007 13:36:58.386304  800812 logs.go:282] 0 containers: []
	W1007 13:36:58.386336  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:36:58.386349  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:36:58.386371  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:36:58.447699  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:36:58.447774  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:36:58.472233  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:36:58.472276  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:36:58.551876  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:36:58.551906  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:36:58.551922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:36:58.627294  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:36:58.627342  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:01.173502  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:01.187870  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:01.187952  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:01.223923  800812 cri.go:89] found id: ""
	I1007 13:37:01.223960  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.223972  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:01.223980  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:01.224065  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:01.260340  800812 cri.go:89] found id: ""
	I1007 13:37:01.260381  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.260393  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:01.260401  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:01.260477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:01.298376  800812 cri.go:89] found id: ""
	I1007 13:37:01.298409  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.298418  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:01.298425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:01.298480  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:01.333715  800812 cri.go:89] found id: ""
	I1007 13:37:01.333752  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.333765  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:01.333772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:01.333842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:01.371297  800812 cri.go:89] found id: ""
	I1007 13:37:01.371334  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.371345  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:01.371353  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:01.371419  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:01.408267  800812 cri.go:89] found id: ""
	I1007 13:37:01.408298  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.408307  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:01.408314  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:01.408366  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:01.446773  800812 cri.go:89] found id: ""
	I1007 13:37:01.446806  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.446819  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:01.446827  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:01.446898  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:01.483286  800812 cri.go:89] found id: ""
	I1007 13:37:01.483324  800812 logs.go:282] 0 containers: []
	W1007 13:37:01.483333  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:01.483343  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:01.483356  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:01.533294  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:01.533344  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:01.547643  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:01.547694  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:01.633979  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:01.634008  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:01.634046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:01.710917  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:01.710949  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:04.252575  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:04.268129  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:04.268198  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:04.322672  800812 cri.go:89] found id: ""
	I1007 13:37:04.322714  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.322726  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:04.322735  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:04.322810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:04.375719  800812 cri.go:89] found id: ""
	I1007 13:37:04.375754  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.375764  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:04.375773  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:04.375828  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:04.417848  800812 cri.go:89] found id: ""
	I1007 13:37:04.417887  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.417898  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:04.417906  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:04.417981  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:04.455162  800812 cri.go:89] found id: ""
	I1007 13:37:04.455191  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.455199  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:04.455206  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:04.455265  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:04.491136  800812 cri.go:89] found id: ""
	I1007 13:37:04.491172  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.491185  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:04.491193  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:04.491266  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:04.532567  800812 cri.go:89] found id: ""
	I1007 13:37:04.532595  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.532604  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:04.532613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:04.532673  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:04.569478  800812 cri.go:89] found id: ""
	I1007 13:37:04.569506  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.569517  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:04.569525  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:04.569594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:04.606862  800812 cri.go:89] found id: ""
	I1007 13:37:04.606901  800812 logs.go:282] 0 containers: []
	W1007 13:37:04.606913  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:04.606926  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:04.606964  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:04.690747  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:04.690778  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:04.690796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:04.777894  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:04.777956  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:04.823405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:04.823449  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:04.875964  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:04.876011  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:07.391094  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:07.404761  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:07.404835  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:07.440841  800812 cri.go:89] found id: ""
	I1007 13:37:07.440886  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.440899  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:07.440907  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:07.440975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:07.479328  800812 cri.go:89] found id: ""
	I1007 13:37:07.479364  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.479375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:07.479383  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:07.479442  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:07.520210  800812 cri.go:89] found id: ""
	I1007 13:37:07.520238  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.520247  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:07.520253  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:07.520306  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:07.562648  800812 cri.go:89] found id: ""
	I1007 13:37:07.562688  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.562699  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:07.562709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:07.562786  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:07.599140  800812 cri.go:89] found id: ""
	I1007 13:37:07.599177  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.599190  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:07.599198  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:07.599267  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:07.635819  800812 cri.go:89] found id: ""
	I1007 13:37:07.635875  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.635889  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:07.635900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:07.635996  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:07.673013  800812 cri.go:89] found id: ""
	I1007 13:37:07.673057  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.673071  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:07.673082  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:07.673162  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:07.711731  800812 cri.go:89] found id: ""
	I1007 13:37:07.711763  800812 logs.go:282] 0 containers: []
	W1007 13:37:07.711775  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:07.711787  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:07.711804  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:07.763167  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:07.763211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:07.777841  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:07.777886  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:07.851433  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:07.851464  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:07.851480  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:07.933204  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:07.933274  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:10.478145  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:10.493579  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:10.493729  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:10.531408  800812 cri.go:89] found id: ""
	I1007 13:37:10.531442  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.531451  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:10.531457  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:10.531515  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:10.575564  800812 cri.go:89] found id: ""
	I1007 13:37:10.575605  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.575619  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:10.575627  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:10.575694  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:10.614515  800812 cri.go:89] found id: ""
	I1007 13:37:10.614550  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.614561  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:10.614568  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:10.614654  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:10.655593  800812 cri.go:89] found id: ""
	I1007 13:37:10.655622  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.655631  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:10.655638  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:10.655720  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:10.697778  800812 cri.go:89] found id: ""
	I1007 13:37:10.697821  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.697833  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:10.697841  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:10.697909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:10.742239  800812 cri.go:89] found id: ""
	I1007 13:37:10.742273  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.742285  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:10.742294  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:10.742354  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:10.780531  800812 cri.go:89] found id: ""
	I1007 13:37:10.780564  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.780573  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:10.780580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:10.780639  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:10.817966  800812 cri.go:89] found id: ""
	I1007 13:37:10.817996  800812 logs.go:282] 0 containers: []
	W1007 13:37:10.818006  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:10.818014  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:10.818038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:10.871697  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:10.871743  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:10.885818  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:10.885849  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:10.957104  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:10.957140  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:10.957155  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:11.041179  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:11.041228  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:13.583574  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:13.597539  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:13.597612  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:13.637556  800812 cri.go:89] found id: ""
	I1007 13:37:13.637594  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.637606  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:13.637614  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:13.637684  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:13.677505  800812 cri.go:89] found id: ""
	I1007 13:37:13.677545  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.677555  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:13.677561  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:13.677624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:13.719010  800812 cri.go:89] found id: ""
	I1007 13:37:13.719039  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.719047  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:13.719054  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:13.719118  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:13.756519  800812 cri.go:89] found id: ""
	I1007 13:37:13.756549  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.756558  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:13.756564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:13.756631  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:13.794809  800812 cri.go:89] found id: ""
	I1007 13:37:13.794838  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.794848  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:13.794856  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:13.794925  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:13.833615  800812 cri.go:89] found id: ""
	I1007 13:37:13.833650  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.833663  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:13.833672  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:13.833744  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:13.871871  800812 cri.go:89] found id: ""
	I1007 13:37:13.871905  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.871918  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:13.871926  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:13.871995  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:13.906594  800812 cri.go:89] found id: ""
	I1007 13:37:13.906628  800812 logs.go:282] 0 containers: []
	W1007 13:37:13.906636  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:13.906646  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:13.906668  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:13.961381  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:13.961425  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:13.975432  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:13.975462  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:14.049693  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:14.049719  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:14.049736  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:14.129142  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:14.129195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:16.673171  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:16.688333  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:16.688401  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:16.729047  800812 cri.go:89] found id: ""
	I1007 13:37:16.729085  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.729098  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:16.729107  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:16.729179  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:16.769484  800812 cri.go:89] found id: ""
	I1007 13:37:16.769515  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.769528  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:16.769536  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:16.769603  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:16.809488  800812 cri.go:89] found id: ""
	I1007 13:37:16.809519  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.809528  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:16.809535  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:16.809601  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:16.845784  800812 cri.go:89] found id: ""
	I1007 13:37:16.845820  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.845831  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:16.845840  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:16.845917  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:16.884741  800812 cri.go:89] found id: ""
	I1007 13:37:16.884769  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.884778  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:16.884785  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:16.884848  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:16.930574  800812 cri.go:89] found id: ""
	I1007 13:37:16.930631  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.930642  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:16.930650  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:16.930743  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:16.969031  800812 cri.go:89] found id: ""
	I1007 13:37:16.969064  800812 logs.go:282] 0 containers: []
	W1007 13:37:16.969082  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:16.969090  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:16.969169  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:17.008742  800812 cri.go:89] found id: ""
	I1007 13:37:17.008781  800812 logs.go:282] 0 containers: []
	W1007 13:37:17.008792  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:17.008814  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:17.008837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:17.062059  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:17.062102  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:17.077540  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:17.077573  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:17.155751  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:17.155789  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:17.155806  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:17.241594  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:17.241644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:19.784947  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:19.799403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:19.799486  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:19.835271  800812 cri.go:89] found id: ""
	I1007 13:37:19.835300  800812 logs.go:282] 0 containers: []
	W1007 13:37:19.835309  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:19.835316  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:19.835369  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:19.877587  800812 cri.go:89] found id: ""
	I1007 13:37:19.877624  800812 logs.go:282] 0 containers: []
	W1007 13:37:19.877637  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:19.877645  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:19.877706  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:19.914102  800812 cri.go:89] found id: ""
	I1007 13:37:19.914133  800812 logs.go:282] 0 containers: []
	W1007 13:37:19.914145  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:19.914152  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:19.914232  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:19.950204  800812 cri.go:89] found id: ""
	I1007 13:37:19.950236  800812 logs.go:282] 0 containers: []
	W1007 13:37:19.950245  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:19.950252  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:19.950318  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:19.986630  800812 cri.go:89] found id: ""
	I1007 13:37:19.986661  800812 logs.go:282] 0 containers: []
	W1007 13:37:19.986672  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:19.986681  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:19.986759  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:20.023734  800812 cri.go:89] found id: ""
	I1007 13:37:20.023765  800812 logs.go:282] 0 containers: []
	W1007 13:37:20.023774  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:20.023780  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:20.023845  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:20.058326  800812 cri.go:89] found id: ""
	I1007 13:37:20.058360  800812 logs.go:282] 0 containers: []
	W1007 13:37:20.058369  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:20.058375  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:20.058429  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:20.097011  800812 cri.go:89] found id: ""
	I1007 13:37:20.097040  800812 logs.go:282] 0 containers: []
	W1007 13:37:20.097052  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:20.097065  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:20.097093  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:20.173600  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:20.173648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:20.215297  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:20.215340  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:20.265333  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:20.265384  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:20.279950  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:20.279999  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:20.355544  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:22.856504  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:22.870746  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:22.870823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:22.910810  800812 cri.go:89] found id: ""
	I1007 13:37:22.910838  800812 logs.go:282] 0 containers: []
	W1007 13:37:22.910847  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:22.910853  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:22.910905  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:22.947771  800812 cri.go:89] found id: ""
	I1007 13:37:22.947805  800812 logs.go:282] 0 containers: []
	W1007 13:37:22.947814  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:22.947820  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:22.947876  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:22.982095  800812 cri.go:89] found id: ""
	I1007 13:37:22.982126  800812 logs.go:282] 0 containers: []
	W1007 13:37:22.982134  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:22.982141  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:22.982195  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:23.027331  800812 cri.go:89] found id: ""
	I1007 13:37:23.027362  800812 logs.go:282] 0 containers: []
	W1007 13:37:23.027372  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:23.027378  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:23.027436  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:23.064602  800812 cri.go:89] found id: ""
	I1007 13:37:23.064645  800812 logs.go:282] 0 containers: []
	W1007 13:37:23.064653  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:23.064660  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:23.064718  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:23.105066  800812 cri.go:89] found id: ""
	I1007 13:37:23.105097  800812 logs.go:282] 0 containers: []
	W1007 13:37:23.105104  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:23.105110  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:23.105174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:23.145409  800812 cri.go:89] found id: ""
	I1007 13:37:23.145444  800812 logs.go:282] 0 containers: []
	W1007 13:37:23.145456  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:23.145466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:23.145534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:23.181850  800812 cri.go:89] found id: ""
	I1007 13:37:23.181885  800812 logs.go:282] 0 containers: []
	W1007 13:37:23.181895  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:23.181909  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:23.181926  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:23.257092  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:23.257140  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:23.297191  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:23.297233  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:23.356251  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:23.356307  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:23.371204  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:23.371249  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:23.450706  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:25.951306  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:25.968173  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:25.968248  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:26.006711  800812 cri.go:89] found id: ""
	I1007 13:37:26.006741  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.006750  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:26.006761  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:26.006836  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:26.045371  800812 cri.go:89] found id: ""
	I1007 13:37:26.045406  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.045417  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:26.045426  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:26.045488  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:26.088169  800812 cri.go:89] found id: ""
	I1007 13:37:26.088203  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.088214  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:26.088223  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:26.088289  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:26.125995  800812 cri.go:89] found id: ""
	I1007 13:37:26.126053  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.126067  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:26.126079  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:26.126147  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:26.165394  800812 cri.go:89] found id: ""
	I1007 13:37:26.165429  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.165438  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:26.165444  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:26.165506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:26.203724  800812 cri.go:89] found id: ""
	I1007 13:37:26.203749  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.203757  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:26.203763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:26.203825  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:26.241998  800812 cri.go:89] found id: ""
	I1007 13:37:26.242055  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.242074  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:26.242082  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:26.242147  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:26.277939  800812 cri.go:89] found id: ""
	I1007 13:37:26.277967  800812 logs.go:282] 0 containers: []
	W1007 13:37:26.277977  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:26.277989  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:26.278014  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:26.292349  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:26.292385  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:26.375355  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:26.375391  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:26.375409  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:26.453060  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:26.453105  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:26.494810  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:26.494848  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:29.047635  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:29.062497  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:29.062567  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:29.096436  800812 cri.go:89] found id: ""
	I1007 13:37:29.096469  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.096480  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:29.096489  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:29.096557  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:29.134157  800812 cri.go:89] found id: ""
	I1007 13:37:29.134197  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.134206  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:29.134231  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:29.134289  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:29.172610  800812 cri.go:89] found id: ""
	I1007 13:37:29.172641  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.172652  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:29.172661  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:29.172719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:29.205797  800812 cri.go:89] found id: ""
	I1007 13:37:29.205828  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.205838  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:29.205845  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:29.205913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:29.242089  800812 cri.go:89] found id: ""
	I1007 13:37:29.242120  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.242130  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:29.242141  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:29.242207  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:29.281286  800812 cri.go:89] found id: ""
	I1007 13:37:29.281321  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.281336  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:29.281346  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:29.281490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:29.315529  800812 cri.go:89] found id: ""
	I1007 13:37:29.315559  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.315568  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:29.315577  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:29.315655  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:29.354001  800812 cri.go:89] found id: ""
	I1007 13:37:29.354052  800812 logs.go:282] 0 containers: []
	W1007 13:37:29.354063  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:29.354078  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:29.354095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:29.407397  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:29.407447  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:29.422203  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:29.422237  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:29.495906  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:29.495929  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:29.495944  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:29.574932  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:29.574975  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:32.117672  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:32.132766  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:32.132866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:32.172804  800812 cri.go:89] found id: ""
	I1007 13:37:32.172841  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.172854  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:32.172863  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:32.172934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:32.215115  800812 cri.go:89] found id: ""
	I1007 13:37:32.215144  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.215156  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:32.215165  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:32.215252  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:32.255290  800812 cri.go:89] found id: ""
	I1007 13:37:32.255328  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.255338  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:32.255345  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:32.255411  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:32.293802  800812 cri.go:89] found id: ""
	I1007 13:37:32.293838  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.293850  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:32.293859  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:32.293932  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:32.328694  800812 cri.go:89] found id: ""
	I1007 13:37:32.328728  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.328739  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:32.328747  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:32.328820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:32.364551  800812 cri.go:89] found id: ""
	I1007 13:37:32.364584  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.364593  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:32.364599  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:32.364657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:32.400807  800812 cri.go:89] found id: ""
	I1007 13:37:32.400838  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.400847  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:32.400854  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:32.400927  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:32.441284  800812 cri.go:89] found id: ""
	I1007 13:37:32.441316  800812 logs.go:282] 0 containers: []
	W1007 13:37:32.441328  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:32.441338  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:32.441352  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:32.527595  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:32.527646  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:32.527666  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:32.609601  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:32.609662  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:32.652826  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:32.652860  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:32.712391  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:32.712435  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:35.227251  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:35.242767  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:35.242904  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:35.281281  800812 cri.go:89] found id: ""
	I1007 13:37:35.281320  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.281332  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:35.281341  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:35.281408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:35.317159  800812 cri.go:89] found id: ""
	I1007 13:37:35.317193  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.317203  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:35.317209  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:35.317264  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:35.358213  800812 cri.go:89] found id: ""
	I1007 13:37:35.358247  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.358259  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:35.358267  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:35.358339  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:35.399923  800812 cri.go:89] found id: ""
	I1007 13:37:35.399956  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.399991  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:35.400000  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:35.400093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:35.440660  800812 cri.go:89] found id: ""
	I1007 13:37:35.440699  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.440710  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:35.440718  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:35.440792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:35.481569  800812 cri.go:89] found id: ""
	I1007 13:37:35.481604  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.481616  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:35.481625  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:35.481703  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:35.518981  800812 cri.go:89] found id: ""
	I1007 13:37:35.519010  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.519021  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:35.519029  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:35.519096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:35.559218  800812 cri.go:89] found id: ""
	I1007 13:37:35.559257  800812 logs.go:282] 0 containers: []
	W1007 13:37:35.559269  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:35.559282  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:35.559306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:35.621496  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:35.621543  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:35.636764  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:35.636794  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:35.722990  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:35.723015  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:35.723031  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:35.800455  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:35.800503  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:38.341880  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:38.356880  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:38.356953  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:38.392589  800812 cri.go:89] found id: ""
	I1007 13:37:38.392618  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.392627  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:38.392634  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:38.392686  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:38.427881  800812 cri.go:89] found id: ""
	I1007 13:37:38.427917  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.427930  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:38.427938  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:38.428012  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:38.482270  800812 cri.go:89] found id: ""
	I1007 13:37:38.482308  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.482320  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:38.482328  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:38.482398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:38.522709  800812 cri.go:89] found id: ""
	I1007 13:37:38.522740  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.522750  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:38.522758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:38.522813  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:38.561266  800812 cri.go:89] found id: ""
	I1007 13:37:38.561305  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.561317  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:38.561326  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:38.561394  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:38.601292  800812 cri.go:89] found id: ""
	I1007 13:37:38.601329  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.601341  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:38.601350  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:38.601418  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:38.642284  800812 cri.go:89] found id: ""
	I1007 13:37:38.642313  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.642322  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:38.642328  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:38.642380  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:38.681222  800812 cri.go:89] found id: ""
	I1007 13:37:38.681264  800812 logs.go:282] 0 containers: []
	W1007 13:37:38.681279  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:38.681293  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:38.681314  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:38.726927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:38.726966  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:38.782348  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:38.782395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:38.797537  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:38.797568  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:38.875107  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:38.875132  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:38.875145  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:41.458594  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:41.474058  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:41.474139  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:41.517285  800812 cri.go:89] found id: ""
	I1007 13:37:41.517315  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.517323  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:41.517330  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:41.517385  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:41.561591  800812 cri.go:89] found id: ""
	I1007 13:37:41.561623  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.561632  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:41.561638  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:41.561698  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:41.607458  800812 cri.go:89] found id: ""
	I1007 13:37:41.607493  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.607506  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:41.607515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:41.607582  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:41.648319  800812 cri.go:89] found id: ""
	I1007 13:37:41.648353  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.648362  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:41.648368  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:41.648424  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:41.690714  800812 cri.go:89] found id: ""
	I1007 13:37:41.690743  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.690751  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:41.690757  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:41.690839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:41.733670  800812 cri.go:89] found id: ""
	I1007 13:37:41.733709  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.733721  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:41.733729  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:41.733799  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:41.769392  800812 cri.go:89] found id: ""
	I1007 13:37:41.769428  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.769440  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:41.769448  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:41.769524  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:41.803678  800812 cri.go:89] found id: ""
	I1007 13:37:41.803715  800812 logs.go:282] 0 containers: []
	W1007 13:37:41.803724  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:41.803735  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:41.803749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:41.855234  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:41.855284  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:41.869458  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:41.869493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:41.947427  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:41.947458  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:41.947476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:42.026069  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:42.026113  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:44.568850  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:44.583853  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:44.583928  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:44.618928  800812 cri.go:89] found id: ""
	I1007 13:37:44.618957  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.618965  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:44.618972  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:44.619043  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:44.656410  800812 cri.go:89] found id: ""
	I1007 13:37:44.656441  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.656449  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:44.656455  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:44.656521  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:44.695439  800812 cri.go:89] found id: ""
	I1007 13:37:44.695471  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.695481  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:44.695490  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:44.695565  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:44.734231  800812 cri.go:89] found id: ""
	I1007 13:37:44.734262  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.734271  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:44.734277  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:44.734356  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:44.774157  800812 cri.go:89] found id: ""
	I1007 13:37:44.774193  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.774204  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:44.774213  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:44.774286  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:44.812609  800812 cri.go:89] found id: ""
	I1007 13:37:44.812648  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.812660  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:44.812668  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:44.812758  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:44.850925  800812 cri.go:89] found id: ""
	I1007 13:37:44.850965  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.850976  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:44.850985  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:44.851056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:44.892286  800812 cri.go:89] found id: ""
	I1007 13:37:44.892317  800812 logs.go:282] 0 containers: []
	W1007 13:37:44.892325  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:44.892336  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:44.892349  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:44.911188  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:44.911225  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:45.005554  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:45.005579  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:45.005599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:45.085586  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:45.085630  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:45.127891  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:45.127922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:47.680536  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:47.696842  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:47.696926  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:47.733435  800812 cri.go:89] found id: ""
	I1007 13:37:47.733472  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.733484  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:47.733493  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:47.733566  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:47.773673  800812 cri.go:89] found id: ""
	I1007 13:37:47.773707  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.773717  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:47.773723  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:47.773778  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:47.811853  800812 cri.go:89] found id: ""
	I1007 13:37:47.811891  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.811904  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:47.811912  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:47.811989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:47.848361  800812 cri.go:89] found id: ""
	I1007 13:37:47.848392  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.848405  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:47.848413  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:47.848473  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:47.883748  800812 cri.go:89] found id: ""
	I1007 13:37:47.883786  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.883794  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:47.883801  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:47.883854  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:47.920678  800812 cri.go:89] found id: ""
	I1007 13:37:47.920710  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.920719  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:47.920725  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:47.920791  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:47.963124  800812 cri.go:89] found id: ""
	I1007 13:37:47.963159  800812 logs.go:282] 0 containers: []
	W1007 13:37:47.963169  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:47.963178  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:47.963242  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:48.001374  800812 cri.go:89] found id: ""
	I1007 13:37:48.001419  800812 logs.go:282] 0 containers: []
	W1007 13:37:48.001432  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:48.001445  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:48.001463  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:48.053134  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:48.053178  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:48.067335  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:48.067372  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:48.145660  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:48.145689  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:48.145707  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:48.226880  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:48.226930  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:50.769642  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:50.785973  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:50.786094  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:50.829528  800812 cri.go:89] found id: ""
	I1007 13:37:50.829559  800812 logs.go:282] 0 containers: []
	W1007 13:37:50.829568  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:50.829575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:50.829628  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:50.882939  800812 cri.go:89] found id: ""
	I1007 13:37:50.882972  800812 logs.go:282] 0 containers: []
	W1007 13:37:50.882981  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:50.882986  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:50.883043  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:50.928298  800812 cri.go:89] found id: ""
	I1007 13:37:50.928333  800812 logs.go:282] 0 containers: []
	W1007 13:37:50.928342  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:50.928349  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:50.928414  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:50.965952  800812 cri.go:89] found id: ""
	I1007 13:37:50.965998  800812 logs.go:282] 0 containers: []
	W1007 13:37:50.966010  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:50.966019  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:50.966097  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:51.003616  800812 cri.go:89] found id: ""
	I1007 13:37:51.003669  800812 logs.go:282] 0 containers: []
	W1007 13:37:51.003680  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:51.003687  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:51.003770  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:51.039700  800812 cri.go:89] found id: ""
	I1007 13:37:51.039749  800812 logs.go:282] 0 containers: []
	W1007 13:37:51.039761  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:51.039770  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:51.039838  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:51.076322  800812 cri.go:89] found id: ""
	I1007 13:37:51.076356  800812 logs.go:282] 0 containers: []
	W1007 13:37:51.076368  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:51.076375  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:51.076446  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:51.114071  800812 cri.go:89] found id: ""
	I1007 13:37:51.114107  800812 logs.go:282] 0 containers: []
	W1007 13:37:51.114120  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:51.114133  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:51.114150  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:51.168411  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:51.168462  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:51.183420  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:51.183460  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:51.259324  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:51.259349  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:51.259367  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:51.344089  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:51.344139  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:53.887334  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:53.902120  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:53.902214  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:53.941939  800812 cri.go:89] found id: ""
	I1007 13:37:53.941989  800812 logs.go:282] 0 containers: []
	W1007 13:37:53.942002  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:53.942011  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:53.942095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:53.978059  800812 cri.go:89] found id: ""
	I1007 13:37:53.978093  800812 logs.go:282] 0 containers: []
	W1007 13:37:53.978106  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:53.978114  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:53.978185  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:54.015441  800812 cri.go:89] found id: ""
	I1007 13:37:54.015465  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.015473  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:54.015479  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:54.015531  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:54.053301  800812 cri.go:89] found id: ""
	I1007 13:37:54.053333  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.053342  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:54.053348  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:54.053406  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:54.087724  800812 cri.go:89] found id: ""
	I1007 13:37:54.087754  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.087763  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:54.087769  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:54.087842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:54.129064  800812 cri.go:89] found id: ""
	I1007 13:37:54.129091  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.129099  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:54.129105  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:54.129159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:54.166307  800812 cri.go:89] found id: ""
	I1007 13:37:54.166347  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.166361  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:54.166369  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:54.166438  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:54.210762  800812 cri.go:89] found id: ""
	I1007 13:37:54.210793  800812 logs.go:282] 0 containers: []
	W1007 13:37:54.210806  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:54.210818  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:54.210835  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:54.263197  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:54.263241  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:54.277179  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:54.277212  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:54.347949  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:54.347980  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:54.347997  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:54.427124  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:54.427171  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:56.971601  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:37:56.987526  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:56.987608  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:57.028401  800812 cri.go:89] found id: ""
	I1007 13:37:57.028440  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.028450  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:37:57.028457  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:57.028513  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:57.066747  800812 cri.go:89] found id: ""
	I1007 13:37:57.066786  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.066801  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:37:57.066810  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:57.066867  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:57.104319  800812 cri.go:89] found id: ""
	I1007 13:37:57.104347  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.104358  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:37:57.104367  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:57.104436  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:57.142532  800812 cri.go:89] found id: ""
	I1007 13:37:57.142561  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.142573  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:37:57.142582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:57.142642  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:57.178210  800812 cri.go:89] found id: ""
	I1007 13:37:57.178242  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.178251  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:37:57.178258  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:57.178320  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:57.213853  800812 cri.go:89] found id: ""
	I1007 13:37:57.213884  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.213895  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:37:57.213902  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:57.213968  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:57.249442  800812 cri.go:89] found id: ""
	I1007 13:37:57.249479  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.249490  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:37:57.249499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:37:57.249573  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:37:57.285462  800812 cri.go:89] found id: ""
	I1007 13:37:57.285493  800812 logs.go:282] 0 containers: []
	W1007 13:37:57.285504  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:37:57.285517  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:57.285539  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:57.335155  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:57.335198  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:57.348755  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:57.348787  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:37:57.422235  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:37:57.422266  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:57.422284  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:57.501878  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:37:57.501938  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:00.042956  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:00.057057  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:00.057147  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:00.092659  800812 cri.go:89] found id: ""
	I1007 13:38:00.092697  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.092709  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:00.092717  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:00.092802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:00.129653  800812 cri.go:89] found id: ""
	I1007 13:38:00.129685  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.129693  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:00.129700  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:00.129780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:00.167845  800812 cri.go:89] found id: ""
	I1007 13:38:00.167880  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.167889  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:00.167897  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:00.167959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:00.205330  800812 cri.go:89] found id: ""
	I1007 13:38:00.205367  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.205376  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:00.205389  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:00.205450  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:00.250246  800812 cri.go:89] found id: ""
	I1007 13:38:00.250277  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.250287  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:00.250293  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:00.250349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:00.286516  800812 cri.go:89] found id: ""
	I1007 13:38:00.286554  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.286566  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:00.286578  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:00.286642  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:00.328037  800812 cri.go:89] found id: ""
	I1007 13:38:00.328076  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.328086  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:00.328096  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:00.328172  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:00.367799  800812 cri.go:89] found id: ""
	I1007 13:38:00.367837  800812 logs.go:282] 0 containers: []
	W1007 13:38:00.367845  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:00.367856  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:00.367870  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:00.420657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:00.420705  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:00.434259  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:00.434291  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:00.504693  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:00.504727  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:00.504745  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:00.585536  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:00.585582  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:03.124930  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:03.139002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:03.139080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:03.175427  800812 cri.go:89] found id: ""
	I1007 13:38:03.175467  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.175479  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:03.175488  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:03.175551  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:03.211992  800812 cri.go:89] found id: ""
	I1007 13:38:03.212034  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.212045  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:03.212053  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:03.212119  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:03.247345  800812 cri.go:89] found id: ""
	I1007 13:38:03.247380  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.247394  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:03.247405  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:03.247470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:03.285171  800812 cri.go:89] found id: ""
	I1007 13:38:03.285201  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.285210  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:03.285217  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:03.285284  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:03.322210  800812 cri.go:89] found id: ""
	I1007 13:38:03.322247  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.322260  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:03.322269  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:03.322381  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:03.359030  800812 cri.go:89] found id: ""
	I1007 13:38:03.359069  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.359081  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:03.359089  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:03.359164  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:03.397633  800812 cri.go:89] found id: ""
	I1007 13:38:03.397662  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.397671  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:03.397680  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:03.397745  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:03.433162  800812 cri.go:89] found id: ""
	I1007 13:38:03.433198  800812 logs.go:282] 0 containers: []
	W1007 13:38:03.433209  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:03.433219  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:03.433234  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:03.482842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:03.482884  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:03.498251  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:03.498290  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:03.572475  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:03.572507  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:03.572525  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:03.650623  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:03.650674  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:06.196119  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:06.210233  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:06.210312  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:06.246886  800812 cri.go:89] found id: ""
	I1007 13:38:06.246928  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.246940  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:06.246958  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:06.247025  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:06.292633  800812 cri.go:89] found id: ""
	I1007 13:38:06.292663  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.292674  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:06.292682  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:06.292759  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:06.333612  800812 cri.go:89] found id: ""
	I1007 13:38:06.333642  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.333652  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:06.333661  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:06.333728  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:06.373114  800812 cri.go:89] found id: ""
	I1007 13:38:06.373142  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.373151  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:06.373159  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:06.373226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:06.419407  800812 cri.go:89] found id: ""
	I1007 13:38:06.419440  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.419448  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:06.419454  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:06.419526  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:06.457634  800812 cri.go:89] found id: ""
	I1007 13:38:06.457666  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.457677  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:06.457685  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:06.457758  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:06.494318  800812 cri.go:89] found id: ""
	I1007 13:38:06.494356  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.494369  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:06.494377  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:06.494450  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:06.531071  800812 cri.go:89] found id: ""
	I1007 13:38:06.531100  800812 logs.go:282] 0 containers: []
	W1007 13:38:06.531109  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:06.531119  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:06.531132  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:06.581508  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:06.581562  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:06.596873  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:06.596912  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:06.671076  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:06.671111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:06.671130  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:06.747855  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:06.747902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:09.294160  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:09.309334  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:09.309429  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:09.347540  800812 cri.go:89] found id: ""
	I1007 13:38:09.347569  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.347580  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:09.347589  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:09.347651  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:09.383570  800812 cri.go:89] found id: ""
	I1007 13:38:09.383605  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.383618  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:09.383626  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:09.383714  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:09.420623  800812 cri.go:89] found id: ""
	I1007 13:38:09.420654  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.420664  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:09.420673  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:09.420757  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:09.456344  800812 cri.go:89] found id: ""
	I1007 13:38:09.456376  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.456388  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:09.456397  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:09.456470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:09.493646  800812 cri.go:89] found id: ""
	I1007 13:38:09.493679  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.493688  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:09.493694  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:09.493752  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:09.531609  800812 cri.go:89] found id: ""
	I1007 13:38:09.531644  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.531664  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:09.531673  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:09.531747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:09.566947  800812 cri.go:89] found id: ""
	I1007 13:38:09.566983  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.566994  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:09.567002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:09.567094  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:09.605528  800812 cri.go:89] found id: ""
	I1007 13:38:09.605559  800812 logs.go:282] 0 containers: []
	W1007 13:38:09.605568  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:09.605578  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:09.605592  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:09.662519  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:09.662574  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:09.677572  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:09.677614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:09.753427  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:09.753456  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:09.753472  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:09.827879  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:09.827932  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:12.368845  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:12.383281  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:12.383357  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:12.417996  800812 cri.go:89] found id: ""
	I1007 13:38:12.418054  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.418067  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:12.418077  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:12.418159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:12.455218  800812 cri.go:89] found id: ""
	I1007 13:38:12.455252  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.455261  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:12.455268  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:12.455324  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:12.492610  800812 cri.go:89] found id: ""
	I1007 13:38:12.492648  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.492668  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:12.492677  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:12.492757  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:12.531576  800812 cri.go:89] found id: ""
	I1007 13:38:12.531612  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.531624  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:12.531632  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:12.531707  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:12.569401  800812 cri.go:89] found id: ""
	I1007 13:38:12.569438  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.569451  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:12.569459  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:12.569530  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:12.613103  800812 cri.go:89] found id: ""
	I1007 13:38:12.613144  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.613157  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:12.613166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:12.613239  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:12.649816  800812 cri.go:89] found id: ""
	I1007 13:38:12.649848  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.649856  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:12.649862  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:12.649929  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:12.685183  800812 cri.go:89] found id: ""
	I1007 13:38:12.685222  800812 logs.go:282] 0 containers: []
	W1007 13:38:12.685233  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:12.685247  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:12.685264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:12.743052  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:12.743106  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:12.757338  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:12.757380  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:12.828214  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:12.828237  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:12.828251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:12.904009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:12.904054  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:15.446751  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:15.461662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:15.461735  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:15.502921  800812 cri.go:89] found id: ""
	I1007 13:38:15.502953  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.502964  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:15.502972  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:15.503048  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:15.541954  800812 cri.go:89] found id: ""
	I1007 13:38:15.541988  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.542094  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:15.542128  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:15.542212  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:15.578409  800812 cri.go:89] found id: ""
	I1007 13:38:15.578439  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.578447  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:15.578453  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:15.578512  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:15.612260  800812 cri.go:89] found id: ""
	I1007 13:38:15.612299  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.612307  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:15.612314  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:15.612380  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:15.648775  800812 cri.go:89] found id: ""
	I1007 13:38:15.648824  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.648836  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:15.648846  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:15.648905  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:15.685596  800812 cri.go:89] found id: ""
	I1007 13:38:15.685630  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.685639  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:15.685646  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:15.685718  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:15.726447  800812 cri.go:89] found id: ""
	I1007 13:38:15.726477  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.726490  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:15.726499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:15.726568  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:15.762094  800812 cri.go:89] found id: ""
	I1007 13:38:15.762129  800812 logs.go:282] 0 containers: []
	W1007 13:38:15.762142  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:15.762154  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:15.762173  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:15.814994  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:15.815039  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:15.829791  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:15.829824  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:15.910476  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:15.910503  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:15.910516  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:15.989742  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:15.989792  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:18.531311  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:18.545486  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:18.545575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:18.580907  800812 cri.go:89] found id: ""
	I1007 13:38:18.580946  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.580956  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:18.580963  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:18.581021  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:18.624036  800812 cri.go:89] found id: ""
	I1007 13:38:18.624071  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.624080  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:18.624086  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:18.624147  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:18.660220  800812 cri.go:89] found id: ""
	I1007 13:38:18.660257  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.660269  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:18.660278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:18.660351  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:18.696961  800812 cri.go:89] found id: ""
	I1007 13:38:18.697008  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.697020  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:18.697029  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:18.697100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:18.734474  800812 cri.go:89] found id: ""
	I1007 13:38:18.734504  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.734515  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:18.734522  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:18.734585  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:18.770871  800812 cri.go:89] found id: ""
	I1007 13:38:18.770909  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.770921  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:18.770930  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:18.771000  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:18.808706  800812 cri.go:89] found id: ""
	I1007 13:38:18.808740  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.808750  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:18.808756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:18.808828  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:18.843144  800812 cri.go:89] found id: ""
	I1007 13:38:18.843172  800812 logs.go:282] 0 containers: []
	W1007 13:38:18.843181  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:18.843191  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:18.843205  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:18.884863  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:18.884900  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:18.934893  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:18.934937  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:18.962413  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:18.962468  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:19.037413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:19.037440  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:19.037456  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:21.616032  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:21.629326  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:21.629393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:21.666850  800812 cri.go:89] found id: ""
	I1007 13:38:21.666882  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.666891  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:21.666898  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:21.666952  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:21.705202  800812 cri.go:89] found id: ""
	I1007 13:38:21.705238  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.705247  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:21.705256  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:21.705330  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:21.740070  800812 cri.go:89] found id: ""
	I1007 13:38:21.740108  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.740119  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:21.740130  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:21.740193  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:21.775857  800812 cri.go:89] found id: ""
	I1007 13:38:21.775886  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.775897  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:21.775905  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:21.775979  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:21.811892  800812 cri.go:89] found id: ""
	I1007 13:38:21.811925  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.811935  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:21.811942  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:21.812025  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:21.847757  800812 cri.go:89] found id: ""
	I1007 13:38:21.847798  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.847810  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:21.847819  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:21.847883  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:21.884024  800812 cri.go:89] found id: ""
	I1007 13:38:21.884061  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.884073  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:21.884083  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:21.884150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:21.920249  800812 cri.go:89] found id: ""
	I1007 13:38:21.920279  800812 logs.go:282] 0 containers: []
	W1007 13:38:21.920288  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:21.920297  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:21.920312  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:21.994696  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:21.994758  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:22.042199  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:22.042239  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:22.097208  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:22.097265  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:22.113314  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:22.113348  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:22.185300  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:24.686229  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:24.699391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:24.699485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:24.759527  800812 cri.go:89] found id: ""
	I1007 13:38:24.759563  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.759576  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:24.759587  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:24.759651  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:24.794231  800812 cri.go:89] found id: ""
	I1007 13:38:24.794264  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.794276  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:24.794286  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:24.794353  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:24.834338  800812 cri.go:89] found id: ""
	I1007 13:38:24.834383  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.834395  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:24.834407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:24.834485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:24.870516  800812 cri.go:89] found id: ""
	I1007 13:38:24.870545  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.870553  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:24.870559  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:24.870616  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:24.907594  800812 cri.go:89] found id: ""
	I1007 13:38:24.907627  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.907638  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:24.907646  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:24.907718  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:24.944336  800812 cri.go:89] found id: ""
	I1007 13:38:24.944374  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.944387  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:24.944396  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:24.944470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:24.984950  800812 cri.go:89] found id: ""
	I1007 13:38:24.984978  800812 logs.go:282] 0 containers: []
	W1007 13:38:24.984992  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:24.985000  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:24.985066  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:25.031919  800812 cri.go:89] found id: ""
	I1007 13:38:25.031957  800812 logs.go:282] 0 containers: []
	W1007 13:38:25.031970  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:25.031984  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:25.032000  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:25.081341  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:25.081381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:25.095455  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:25.095489  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:25.162848  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:25.162883  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:25.162898  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:25.243243  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:25.243290  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:27.786431  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:27.800460  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:27.800550  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:27.837044  800812 cri.go:89] found id: ""
	I1007 13:38:27.837075  800812 logs.go:282] 0 containers: []
	W1007 13:38:27.837084  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:27.837091  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:27.837147  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:27.872945  800812 cri.go:89] found id: ""
	I1007 13:38:27.872988  800812 logs.go:282] 0 containers: []
	W1007 13:38:27.873001  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:27.873010  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:27.873078  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:27.909641  800812 cri.go:89] found id: ""
	I1007 13:38:27.909674  800812 logs.go:282] 0 containers: []
	W1007 13:38:27.909685  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:27.909694  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:27.909766  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:27.945930  800812 cri.go:89] found id: ""
	I1007 13:38:27.945964  800812 logs.go:282] 0 containers: []
	W1007 13:38:27.945975  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:27.945984  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:27.946061  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:27.985977  800812 cri.go:89] found id: ""
	I1007 13:38:27.986034  800812 logs.go:282] 0 containers: []
	W1007 13:38:27.986047  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:27.986059  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:27.986129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:28.027380  800812 cri.go:89] found id: ""
	I1007 13:38:28.027414  800812 logs.go:282] 0 containers: []
	W1007 13:38:28.027425  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:28.027433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:28.027500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:28.073379  800812 cri.go:89] found id: ""
	I1007 13:38:28.073417  800812 logs.go:282] 0 containers: []
	W1007 13:38:28.073429  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:28.073438  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:28.073509  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:28.109538  800812 cri.go:89] found id: ""
	I1007 13:38:28.109572  800812 logs.go:282] 0 containers: []
	W1007 13:38:28.109584  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:28.109596  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:28.109612  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:28.161860  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:28.161913  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:28.176547  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:28.176583  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:28.251658  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:28.251686  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:28.251703  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:28.334928  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:28.334979  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:30.877410  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	* 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	* 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-120978 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
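The kubeadm output above shows the kubelet never answering its health check on localhost:10248, and minikube's own suggestion is to inspect the kubelet journal and retry with an explicit cgroup driver. A minimal diagnostic sketch, using only commands already quoted in this report (the profile name old-k8s-version-120978 and the start flags are taken from the failing run; adapt as needed):

	# Inspect the kubelet logs inside the minikube VM for this profile
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 -- sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry the start with the cgroup driver minikube suggests
	out/minikube-linux-amd64 start -p old-k8s-version-120978 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd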
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (256.878556ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-120978 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:38:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:38:32.108474  802960 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:38:32.108648  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108659  802960 out.go:358] Setting ErrFile to fd 2...
	I1007 13:38:32.108665  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108864  802960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:38:32.109477  802960 out.go:352] Setting JSON to false
	I1007 13:38:32.110672  802960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12061,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:38:32.110773  802960 start.go:139] virtualization: kvm guest
	I1007 13:38:32.113566  802960 out.go:177] * [default-k8s-diff-port-489319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:38:32.115580  802960 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:38:32.115627  802960 notify.go:220] Checking for updates...
	I1007 13:38:32.118464  802960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:38:32.120173  802960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:38:32.121799  802960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:38:32.123382  802960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:38:32.125020  802960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:38:29.209336  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:31.212514  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:32.126861  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:38:32.127255  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.127337  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.143671  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1007 13:38:32.144158  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.144820  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.144844  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.145206  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.145416  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.145655  802960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:38:32.146010  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.146112  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.161508  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1007 13:38:32.162082  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.162517  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.162541  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.162886  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.163112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.200281  802960 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:38:32.201380  802960 start.go:297] selected driver: kvm2
	I1007 13:38:32.201393  802960 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.201499  802960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:38:32.202260  802960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.202353  802960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:38:32.218742  802960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:38:32.219129  802960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:38:32.219168  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:38:32.219221  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:38:32.219254  802960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.219380  802960 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.222273  802960 out.go:177] * Starting "default-k8s-diff-port-489319" primary control-plane node in "default-k8s-diff-port-489319" cluster
	I1007 13:38:32.223750  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:38:32.223801  802960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:38:32.223816  802960 cache.go:56] Caching tarball of preloaded images
	I1007 13:38:32.223891  802960 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:38:32.223901  802960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:38:32.223997  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:38:32.224208  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:38:32.224280  802960 start.go:364] duration metric: took 38.73µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:38:32.224297  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:38:32.224303  802960 fix.go:54] fixHost starting: 
	I1007 13:38:32.224637  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.224674  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.239368  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1007 13:38:32.239869  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.240386  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.240409  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.240813  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.241063  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.241228  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:38:32.243196  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Running err=<nil>
	W1007 13:38:32.243217  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:38:32.245881  802960 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-489319" VM ...
	I1007 13:38:30.514797  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:33.014487  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:32.247486  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:38:32.247524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.247949  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:38:32.250961  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:38:32.251539  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251823  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:38:32.252088  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252375  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:38:32.252944  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:38:32.253182  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:38:32.253197  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:38:35.122367  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:33.709093  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.709691  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.514611  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:38.014557  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:38.198306  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:37.710573  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.210415  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.516522  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.013328  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:44.274468  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:42.709396  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:44.709744  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.711052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:45.015094  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:47.513659  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:49.515994  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:47.346399  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:49.208303  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:51.209295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.013939  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:54.514863  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:53.426353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:56.498332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:53.709052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.710245  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:57.014521  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:59.020215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.208916  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:00.210007  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.514807  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:04.015032  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:05.618355  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:02.709614  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:05.211295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:06.514215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.515091  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.690293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:07.709699  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:10.208983  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.014066  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:13.014936  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:14.774418  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:12.209697  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:14.710276  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:15.514317  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.514843  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:39:17.842234  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:17.210309  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:19.210449  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:21.709672  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:20.014047  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:22.014646  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:24.015649  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:23.926328  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:26.994313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:24.203346  800212 pod_ready.go:82] duration metric: took 4m0.00074454s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" ...
	E1007 13:39:24.203420  800212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:39:24.203447  800212 pod_ready.go:39] duration metric: took 4m15.010484686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:39:24.203483  800212 kubeadm.go:597] duration metric: took 4m22.534978235s to restartPrimaryControlPlane
	W1007 13:39:24.203568  800212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:24.203597  800212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:26.018248  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:28.513858  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:30.513894  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:33.013867  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:33.074393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:36.146280  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:35.015200  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:37.514925  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
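The repeating pgrep/crictl/journalctl cycle above (process 800812, the v1.20.0 old-k8s-version cluster) is minikube polling for a running kube-apiserver after restarting the node; since no control-plane containers exist and localhost:8443 refuses connections, each pass falls back to gathering kubelet, dmesg, "describe nodes", CRI-O and container-status output before retrying. The same checks can be reproduced by hand on the node; a rough shell sketch of the commands already shown in the log (assuming SSH access to the minikube VM and crictl on the PATH):

# Poll for each control-plane component the way the cri.go/logs.go lines above do.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="${name}")
  if [ -z "${ids}" ]; then
    echo "no container was found matching \"${name}\""
  else
    echo "${name}: ${ids}"
  fi
done

# When nothing is running, collect the same context minikube gathers between retries.
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo crictl ps -a || sudo docker ps -a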
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:40.015715  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.514458  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.226242  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.298298  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.013741  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:47.014360  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:49.015110  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:50.616036  800212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.41240969s)
	I1007 13:39:50.616124  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:50.638334  800212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:50.654214  800212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:50.672345  800212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:50.672370  800212 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:50.672429  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:50.699073  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:50.699139  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:50.711774  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:50.737818  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:50.737885  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:50.749603  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.760893  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:50.760965  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.771572  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:50.781793  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:50.781856  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
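The ls/grep/rm sequence above is minikube's stale-config check: after "kubeadm reset" the /etc/kubernetes/*.conf files are gone, so each grep for the expected control-plane endpoint fails to open the file (exit status 2) and the file is removed (a no-op here) before re-initialising. The same logic, condensed into a sketch with the endpoint taken from the log lines:

endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Keep the kubeconfig only if it already points at the expected endpoint;
  # otherwise remove it so the following "kubeadm init" regenerates it.
  if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/${f}"
  fi
done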
	I1007 13:39:50.793541  800212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:50.851411  800212 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:39:50.851486  800212 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:50.967773  800212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:50.967938  800212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:50.968105  800212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:50.976935  800212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:51.378305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:50.979096  800212 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:50.979227  800212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:50.979291  800212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:50.979375  800212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:50.979467  800212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:50.979560  800212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:50.979634  800212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:50.979717  800212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:50.979789  800212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:50.979857  800212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:50.979925  800212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:50.979959  800212 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:50.980011  800212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:51.280206  800212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:51.430988  800212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:39:51.677074  800212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:51.867985  800212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:52.283613  800212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:52.284108  800212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:52.288874  800212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.514892  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.515960  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:39:54.454391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:52.290945  800212 out.go:235]   - Booting up control plane ...
	I1007 13:39:52.291058  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:52.291164  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:52.291610  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:52.312059  800212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:52.318321  800212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:52.318412  800212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:52.456671  800212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:39:52.456802  800212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:39:52.958340  800212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.579104ms
	I1007 13:39:52.958484  800212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:39:57.959379  800212 kubeadm.go:310] [api-check] The API server is healthy after 5.001260012s
	I1007 13:39:57.980499  800212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:39:57.999006  800212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:39:58.043754  800212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:39:58.044050  800212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-653322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:39:58.062167  800212 kubeadm.go:310] [bootstrap-token] Using token: 72a6vd.dmbcvepur9l2dhmv
	I1007 13:39:58.064163  800212 out.go:235]   - Configuring RBAC rules ...
	I1007 13:39:58.064326  800212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:39:58.079082  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:39:58.094414  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:39:58.099862  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:39:58.109846  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:39:58.122572  800212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:39:58.370342  800212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:39:58.808645  800212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:39:59.367759  800212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:39:59.368708  800212 kubeadm.go:310] 
	I1007 13:39:59.368834  800212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:39:59.368859  800212 kubeadm.go:310] 
	I1007 13:39:59.368976  800212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:39:59.368991  800212 kubeadm.go:310] 
	I1007 13:39:59.369031  800212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:39:59.369111  800212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:39:59.369155  800212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:39:59.369162  800212 kubeadm.go:310] 
	I1007 13:39:59.369217  800212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:39:59.369245  800212 kubeadm.go:310] 
	I1007 13:39:59.369317  800212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:39:59.369329  800212 kubeadm.go:310] 
	I1007 13:39:59.369390  800212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:39:59.369487  800212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:39:59.369588  800212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:39:59.369600  800212 kubeadm.go:310] 
	I1007 13:39:59.369722  800212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:39:59.369826  800212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:39:59.369838  800212 kubeadm.go:310] 
	I1007 13:39:59.369960  800212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370113  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:39:59.370151  800212 kubeadm.go:310] 	--control-plane 
	I1007 13:39:59.370160  800212 kubeadm.go:310] 
	I1007 13:39:59.370302  800212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:39:59.370331  800212 kubeadm.go:310] 
	I1007 13:39:59.370458  800212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370592  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:39:59.371701  800212 kubeadm.go:310] W1007 13:39:50.802353    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372082  800212 kubeadm.go:310] W1007 13:39:50.803265    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372217  800212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
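The kubelet-check and api-check lines earlier in this init (kubelet healthy after ~0.5s, API server healthy after ~5s) probe well-known endpoints that can also be hit manually when diagnosing the stuck v1.20.0 cluster, whose "describe nodes" attempts keep getting connection refused on localhost:8443. A minimal sketch, assuming the default ports printed in the log; the apiserver /healthz probe may return 401 rather than ok if anonymous auth is disabled:

# Kubelet healthz endpoint quoted by kubeadm's [kubelet-check] step.
curl -fsS http://127.0.0.1:10248/healthz; echo
# API server health on the port this cluster uses; -k because the serving cert
# is not trusted by the host. "connection refused" here is the same symptom the
# repeated describe-nodes failures show on the old-k8s-version node.
curl -fsSk https://localhost:8443/healthz; echo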
	I1007 13:39:59.372252  800212 cni.go:84] Creating CNI manager for ""
	I1007 13:39:59.372266  800212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:39:59.374383  800212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:39:56.015201  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:58.517383  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:00.534326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:59.376063  800212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:39:59.389097  800212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
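The 496-byte 1-k8s.conflist copied above is not reproduced in the log; for orientation, a bridge CNI configuration of the general shape minikube's bridge CNI step installs looks like the illustrative sketch below (field values are assumptions, not the exact file contents):

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF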
	I1007 13:39:59.409782  800212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:39:59.409864  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:39:59.409859  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-653322 minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=embed-certs-653322 minikube.k8s.io/primary=true
	I1007 13:39:59.451756  800212 ops.go:34] apiserver oom_adj: -16
	I1007 13:39:59.647019  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.147361  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.647505  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.147866  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.647444  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.147271  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.647066  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.147382  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.647825  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.796730  800212 kubeadm.go:1113] duration metric: took 4.386947643s to wait for elevateKubeSystemPrivileges
	I1007 13:40:03.796776  800212 kubeadm.go:394] duration metric: took 5m2.178460784s to StartCluster
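The burst of "kubectl get sa default" calls every ~500ms above is minikube waiting for the default ServiceAccount to appear while the cluster-admin binding for kube-system:default (the create clusterrolebinding line a few entries earlier) takes effect; elevateKubeSystemPrivileges is reported done once the ServiceAccount exists. Roughly the same end state by hand, reusing the paths and flags from the log (idempotence not handled in this sketch):

KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
KCONF=/var/lib/minikube/kubeconfig
# Wait for the controller manager to create the default ServiceAccount.
until sudo "$KUBECTL" get sa default --kubeconfig="$KCONF" >/dev/null 2>&1; do
  sleep 0.5
done
# Grant cluster-admin to kube-system:default, as the clusterrolebinding line above does.
sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
  --kubeconfig="$KCONF"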
	I1007 13:40:03.796802  800212 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.796927  800212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:40:03.800809  800212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.801152  800212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:40:03.801235  800212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:40:03.801341  800212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-653322"
	I1007 13:40:03.801366  800212 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-653322"
	W1007 13:40:03.801374  800212 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:40:03.801380  800212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-653322"
	I1007 13:40:03.801397  800212 addons.go:69] Setting metrics-server=true in profile "embed-certs-653322"
	I1007 13:40:03.801418  800212 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:40:03.801428  800212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-653322"
	I1007 13:40:03.801442  800212 addons.go:234] Setting addon metrics-server=true in "embed-certs-653322"
	W1007 13:40:03.801452  800212 addons.go:243] addon metrics-server should already be in state true
	I1007 13:40:03.801485  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801411  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801854  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801895  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801901  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.801908  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801937  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.802059  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.803364  800212 out.go:177] * Verifying Kubernetes components...
	I1007 13:40:03.805464  800212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:40:03.820021  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1007 13:40:03.820297  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1007 13:40:03.820632  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.820812  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.821460  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821482  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.821598  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1007 13:40:03.821627  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821639  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.822131  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822377  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.822388  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822769  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822823  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.822938  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822990  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.823583  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.823609  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.824011  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.824209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.828672  800212 addons.go:234] Setting addon default-storageclass=true in "embed-certs-653322"
	W1007 13:40:03.828697  800212 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:40:03.828731  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.829118  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.829169  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.839251  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1007 13:40:03.839800  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.840506  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.840533  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.840894  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.841130  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.842660  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1007 13:40:03.843181  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.843235  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.843819  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.843841  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.844191  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.844469  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.845247  800212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:40:03.846191  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.846688  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:40:03.846712  800212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:40:03.846737  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.847801  800212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:40:01.015857  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.515782  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.849482  800212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:03.849504  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:40:03.849528  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.851923  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852765  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.852798  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852987  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.853209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.853367  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.853482  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.854532  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1007 13:40:03.854540  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855100  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.855127  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855438  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.855484  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.855836  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.856149  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.856179  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.856258  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.856436  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.856791  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.857523  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.857572  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.873780  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1007 13:40:03.874162  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.874943  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.874958  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.875358  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.875581  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.877658  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.877924  800212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:03.877940  800212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:40:03.877962  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.881764  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882241  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.882272  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882619  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.882839  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.882999  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.883146  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:04.059493  800212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:40:04.092602  800212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135614  800212 node_ready.go:49] node "embed-certs-653322" has status "Ready":"True"
	I1007 13:40:04.135639  800212 node_ready.go:38] duration metric: took 42.999262ms for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135649  800212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:04.168633  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:04.177323  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:04.206431  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:04.358331  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:40:04.358360  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:40:04.453932  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:40:04.453978  800212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:40:04.543045  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:04.543079  800212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:40:04.628016  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:05.373199  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166722968s)
	I1007 13:40:05.373269  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373286  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373188  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195822413s)
	I1007 13:40:05.373374  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373395  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373726  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373746  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373756  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373764  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373772  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.373786  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373798  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373810  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373819  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.374033  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374019  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374056  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.374077  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374104  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374123  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.449400  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.449435  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.449768  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.449785  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034194  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.406118465s)
	I1007 13:40:06.034270  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034292  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034583  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034603  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034613  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034620  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034852  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:06.034920  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034951  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034967  800212 addons.go:475] Verifying addon metrics-server=true in "embed-certs-653322"
	I1007 13:40:06.036901  800212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:40:03.602357  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:06.038108  800212 addons.go:510] duration metric: took 2.236891318s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1007 13:40:06.178973  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:06.015270  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.514554  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.675453  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:10.182593  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.182620  800212 pod_ready.go:82] duration metric: took 6.013956349s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.182630  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189183  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.189216  800212 pod_ready.go:82] duration metric: took 6.578623ms for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189229  800212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195272  800212 pod_ready.go:93] pod "etcd-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.195298  800212 pod_ready.go:82] duration metric: took 6.06024ms for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195308  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203341  800212 pod_ready.go:93] pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.203365  800212 pod_ready.go:82] duration metric: took 8.050464ms for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203375  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209333  800212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.209364  800212 pod_ready.go:82] duration metric: took 5.980877ms for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209377  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573541  800212 pod_ready.go:93] pod "kube-proxy-z9r92" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.573574  800212 pod_ready.go:82] duration metric: took 364.188673ms for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573586  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973294  800212 pod_ready.go:93] pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.973325  800212 pod_ready.go:82] duration metric: took 399.732244ms for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973334  800212 pod_ready.go:39] duration metric: took 6.837673484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:10.973354  800212 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:40:10.973424  800212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:40:10.989629  800212 api_server.go:72] duration metric: took 7.188432004s to wait for apiserver process to appear ...
	I1007 13:40:10.989661  800212 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:40:10.989690  800212 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1007 13:40:10.994679  800212 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1007 13:40:10.995855  800212 api_server.go:141] control plane version: v1.31.1
	I1007 13:40:10.995882  800212 api_server.go:131] duration metric: took 6.212413ms to wait for apiserver health ...
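The healthz probe above is minikube's own Go client hitting the apiserver's health endpoint. As an illustration only (the IP/port are just what this run happened to get, and /healthz is typically readable without credentials in kubeadm-style clusters via the default system:public-info-viewer binding), the same check can be reproduced by hand:

	# expect HTTP 200 with body "ok" on a healthy control plane
	curl -ks https://192.168.50.36:8443/healthz
	# per-check breakdown, available on current apiserver versions
	curl -ks "https://192.168.50.36:8443/readyz?verbose"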
	I1007 13:40:10.995894  800212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:40:11.176174  800212 system_pods.go:59] 9 kube-system pods found
	I1007 13:40:11.176207  800212 system_pods.go:61] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.176213  800212 system_pods.go:61] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.176217  800212 system_pods.go:61] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.176221  800212 system_pods.go:61] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.176225  800212 system_pods.go:61] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.176228  800212 system_pods.go:61] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.176231  800212 system_pods.go:61] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.176236  800212 system_pods.go:61] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.176240  800212 system_pods.go:61] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.176251  800212 system_pods.go:74] duration metric: took 180.350309ms to wait for pod list to return data ...
	I1007 13:40:11.176258  800212 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:40:11.374362  800212 default_sa.go:45] found service account: "default"
	I1007 13:40:11.374397  800212 default_sa.go:55] duration metric: took 198.130993ms for default service account to be created ...
	I1007 13:40:11.374410  800212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:40:11.577087  800212 system_pods.go:86] 9 kube-system pods found
	I1007 13:40:11.577124  800212 system_pods.go:89] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.577130  800212 system_pods.go:89] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.577134  800212 system_pods.go:89] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.577138  800212 system_pods.go:89] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.577141  800212 system_pods.go:89] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.577145  800212 system_pods.go:89] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.577149  800212 system_pods.go:89] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.577157  800212 system_pods.go:89] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.577161  800212 system_pods.go:89] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.577171  800212 system_pods.go:126] duration metric: took 202.754732ms to wait for k8s-apps to be running ...
	I1007 13:40:11.577179  800212 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:40:11.577228  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:40:11.595122  800212 system_svc.go:56] duration metric: took 17.926197ms WaitForService to wait for kubelet
	I1007 13:40:11.595159  800212 kubeadm.go:582] duration metric: took 7.793966621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:40:11.595189  800212 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:40:11.774788  800212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:40:11.774819  800212 node_conditions.go:123] node cpu capacity is 2
	I1007 13:40:11.774833  800212 node_conditions.go:105] duration metric: took 179.638486ms to run NodePressure ...
	I1007 13:40:11.774845  800212 start.go:241] waiting for startup goroutines ...
	I1007 13:40:11.774852  800212 start.go:246] waiting for cluster config update ...
	I1007 13:40:11.774862  800212 start.go:255] writing updated cluster config ...
	I1007 13:40:11.775199  800212 ssh_runner.go:195] Run: rm -f paused
	I1007 13:40:11.829109  800212 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:40:11.831389  800212 out.go:177] * Done! kubectl is now configured to use "embed-certs-653322" cluster and "default" namespace by default
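This is the end of the embed-certs-653322 run for process 800212; kubectl is now pointed at that cluster. A manual follow-up check against the same kubeconfig (not part of the test, shown only for orientation) would be:

	kubectl --context embed-certs-653322 get nodes -o wide
	kubectl --context embed-certs-653322 -n kube-system get pods
	# note: metrics-server-6867b74b74-xwpbg was still Pending above, consistent with
	# the metrics-server "Ready" timeouts seen elsewhere in this log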
	I1007 13:40:09.682305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:11.014595  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:13.514109  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:12.754391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:16.015105  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.513935  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.834414  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.906376  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.015129  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:23.518245  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:26.014981  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:28.513904  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:27.986365  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.058375  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.015269  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.514729  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:36.013424  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.014881  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.507584  800087 pod_ready.go:82] duration metric: took 4m0.000325195s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" ...
	E1007 13:40:38.507633  800087 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:40:38.507657  800087 pod_ready.go:39] duration metric: took 4m14.542185527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:38.507694  800087 kubeadm.go:597] duration metric: took 4m21.215120888s to restartPrimaryControlPlane
	W1007 13:40:38.507784  800087 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:40:38.507824  800087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:37.138368  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:40.210391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:46.290312  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:49.362313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:55.442333  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:58.514279  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:04.757708  800087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.249856079s)
	I1007 13:41:04.757796  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:04.787393  800087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:41:04.805311  800087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:04.819815  800087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:04.819839  800087 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:04.819889  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:04.832607  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:04.832673  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:04.847624  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:04.859808  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:04.859890  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:04.886041  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.896677  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:04.896746  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.906688  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:04.915884  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:04.915965  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:41:04.925767  800087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:04.981704  800087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:41:04.981799  800087 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:05.104530  800087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:05.104648  800087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:05.104750  800087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:41:05.114782  800087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:05.116948  800087 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:05.117074  800087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:05.117168  800087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:05.117275  800087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:05.117358  800087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:05.117447  800087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:05.117522  800087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:05.117620  800087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:05.117733  800087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:05.117851  800087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:05.117961  800087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:05.118055  800087 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:05.118147  800087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:05.216990  800087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:05.548814  800087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:41:05.921322  800087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:06.206950  800087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:06.412087  800087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:06.412698  800087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:06.415768  800087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:04.598286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:06.418055  800087 out.go:235]   - Booting up control plane ...
	I1007 13:41:06.418195  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:06.419324  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:06.420095  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:06.437974  800087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:06.447497  800087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:06.447580  800087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:06.582080  800087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:41:06.582223  800087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:41:07.583021  800087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001204833s
	I1007 13:41:07.583165  800087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:07.666427  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:13.085728  800087 kubeadm.go:310] [api-check] The API server is healthy after 5.502732546s
	I1007 13:41:13.105047  800087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:41:13.122083  800087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:41:13.157464  800087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:41:13.157751  800087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-016701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:41:13.176062  800087 kubeadm.go:310] [bootstrap-token] Using token: ott6bx.mfcul37ilsfpftrr
	I1007 13:41:13.177574  800087 out.go:235]   - Configuring RBAC rules ...
	I1007 13:41:13.177739  800087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:41:13.184629  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:41:13.200989  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:41:13.206521  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:41:13.212338  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:41:13.217063  800087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:41:13.493012  800087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:41:13.926154  800087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:41:14.500818  800087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:41:14.500844  800087 kubeadm.go:310] 
	I1007 13:41:14.500894  800087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:41:14.500899  800087 kubeadm.go:310] 
	I1007 13:41:14.500988  800087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:41:14.501001  800087 kubeadm.go:310] 
	I1007 13:41:14.501041  800087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:41:14.501095  800087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:41:14.501196  800087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:41:14.501223  800087 kubeadm.go:310] 
	I1007 13:41:14.501307  800087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:41:14.501316  800087 kubeadm.go:310] 
	I1007 13:41:14.501379  800087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:41:14.501448  800087 kubeadm.go:310] 
	I1007 13:41:14.501533  800087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:41:14.501629  800087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:41:14.501733  800087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:41:14.501750  800087 kubeadm.go:310] 
	I1007 13:41:14.501854  800087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:41:14.501964  800087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:41:14.501973  800087 kubeadm.go:310] 
	I1007 13:41:14.502109  800087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502269  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:41:14.502311  800087 kubeadm.go:310] 	--control-plane 
	I1007 13:41:14.502322  800087 kubeadm.go:310] 
	I1007 13:41:14.502443  800087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:41:14.502453  800087 kubeadm.go:310] 
	I1007 13:41:14.502600  800087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502755  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:41:14.503943  800087 kubeadm.go:310] W1007 13:41:04.948448    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504331  800087 kubeadm.go:310] W1007 13:41:04.949311    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504448  800087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:14.504466  800087 cni.go:84] Creating CNI manager for ""
	I1007 13:41:14.504474  800087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:41:14.506680  800087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:41:14.508369  800087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:41:14.520414  800087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
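The 496-byte file copied here is minikube's generated bridge CNI config; the log does not reproduce its contents. A bridge conflist of this general shape (a sketch with illustrative values, not the exact generated file) is what the kvm2 + crio combination ends up writing:

	# sketch only -- the real /etc/cni/net.d/1-k8s.conflist is generated by minikube
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF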
	I1007 13:41:14.544669  800087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:41:14.544833  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:14.544851  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016701 minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=no-preload-016701 minikube.k8s.io/primary=true
	I1007 13:41:14.772594  800087 ops.go:34] apiserver oom_adj: -16
	I1007 13:41:14.772619  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:13.746372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:16.822393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:15.273211  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:15.772786  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.273580  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.773395  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.272868  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.773484  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.273717  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.405010  800087 kubeadm.go:1113] duration metric: took 3.86025273s to wait for elevateKubeSystemPrivileges
	I1007 13:41:18.405055  800087 kubeadm.go:394] duration metric: took 5m1.164485599s to StartCluster
	I1007 13:41:18.405081  800087 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.405182  800087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:41:18.406935  800087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.407244  800087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:41:18.407398  800087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:41:18.407513  800087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016701"
	I1007 13:41:18.407539  800087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-016701"
	W1007 13:41:18.407549  800087 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:41:18.407548  800087 addons.go:69] Setting default-storageclass=true in profile "no-preload-016701"
	I1007 13:41:18.407572  800087 addons.go:69] Setting metrics-server=true in profile "no-preload-016701"
	I1007 13:41:18.407615  800087 addons.go:234] Setting addon metrics-server=true in "no-preload-016701"
	W1007 13:41:18.407721  800087 addons.go:243] addon metrics-server should already be in state true
	I1007 13:41:18.407850  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407591  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407545  800087 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:41:18.407594  800087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016701"
	I1007 13:41:18.408374  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408387  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408417  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408424  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408447  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408542  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.409406  800087 out.go:177] * Verifying Kubernetes components...
	I1007 13:41:18.411018  800087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:41:18.425614  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I1007 13:41:18.426275  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.426764  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I1007 13:41:18.426926  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.426956  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427308  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.427410  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.427840  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.427862  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427976  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.428024  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.428257  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.428470  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.428478  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I1007 13:41:18.428980  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.429578  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.429605  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.429927  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.430564  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.430602  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.431895  800087 addons.go:234] Setting addon default-storageclass=true in "no-preload-016701"
	W1007 13:41:18.431918  800087 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:41:18.431952  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.432279  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.432319  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.445003  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1007 13:41:18.445514  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.445968  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1007 13:41:18.446101  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.446125  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.446534  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.446580  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.446821  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.447159  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.447187  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.447559  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.447754  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.449595  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.450543  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.452177  800087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:41:18.452788  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I1007 13:41:18.453311  800087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:41:18.453332  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.454421  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.454443  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.454767  800087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.454791  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:41:18.454813  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.454902  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.455260  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:41:18.455277  800087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:41:18.455293  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.455514  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.455574  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.458904  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459133  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459321  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459529  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459681  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459699  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459704  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.459849  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.459962  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459994  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.460161  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.460349  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.460480  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.495484  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1007 13:41:18.496027  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.496790  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.496828  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.497324  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.497537  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.499178  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.499425  800087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.499440  800087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:41:18.499457  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.502808  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503337  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.503363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503573  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.503796  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.503972  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.504135  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.607501  800087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:41:18.631538  800087 node_ready.go:35] waiting up to 6m0s for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645041  800087 node_ready.go:49] node "no-preload-016701" has status "Ready":"True"
	I1007 13:41:18.645065  800087 node_ready.go:38] duration metric: took 13.492405ms for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645076  800087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:18.651831  800087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:18.689502  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.714363  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:41:18.714386  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:41:18.738095  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.794344  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:41:18.794384  800087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:41:18.906848  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:18.906886  800087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:41:18.991553  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:19.434333  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434360  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434687  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.434701  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434710  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434716  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434932  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434987  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435004  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.435015  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434993  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435269  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435274  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435282  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.435290  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.435297  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.436889  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.436909  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.456678  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.456714  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.457112  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.457133  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.457164  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.382548  800087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.390945966s)
	I1007 13:41:20.382614  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.382628  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.382952  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383052  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383068  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.383077  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.383010  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.383354  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383370  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383384  800087 addons.go:475] Verifying addon metrics-server=true in "no-preload-016701"
	I1007 13:41:20.385366  800087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:41:20.386603  800087 addons.go:510] duration metric: took 1.979211294s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1007 13:41:20.665725  800087 pod_ready.go:103] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"False"
	I1007 13:41:22.158063  800087 pod_ready.go:93] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:22.158090  800087 pod_ready.go:82] duration metric: took 3.506228901s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:22.158100  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165304  800087 pod_ready.go:93] pod "kube-apiserver-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.165330  800087 pod_ready.go:82] duration metric: took 2.007223213s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165340  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172907  800087 pod_ready.go:93] pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.172930  800087 pod_ready.go:82] duration metric: took 7.583143ms for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172939  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180216  800087 pod_ready.go:93] pod "kube-proxy-bjqg2" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.180243  800087 pod_ready.go:82] duration metric: took 7.297732ms for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180255  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185080  800087 pod_ready.go:93] pod "kube-scheduler-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.185108  800087 pod_ready.go:82] duration metric: took 4.84549ms for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185119  800087 pod_ready.go:39] duration metric: took 5.540032302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:24.185141  800087 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:41:24.185197  800087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:41:24.201360  800087 api_server.go:72] duration metric: took 5.794073168s to wait for apiserver process to appear ...
	I1007 13:41:24.201464  800087 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:41:24.201496  800087 api_server.go:253] Checking apiserver healthz at https://192.168.39.197:8443/healthz ...
	I1007 13:41:24.207141  800087 api_server.go:279] https://192.168.39.197:8443/healthz returned 200:
	ok
	I1007 13:41:24.208456  800087 api_server.go:141] control plane version: v1.31.1
	I1007 13:41:24.208481  800087 api_server.go:131] duration metric: took 7.007742ms to wait for apiserver health ...
	I1007 13:41:24.208491  800087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:41:24.213660  800087 system_pods.go:59] 9 kube-system pods found
	I1007 13:41:24.213693  800087 system_pods.go:61] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213701  800087 system_pods.go:61] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213711  800087 system_pods.go:61] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.213716  800087 system_pods.go:61] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.213719  800087 system_pods.go:61] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.213722  800087 system_pods.go:61] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.213725  800087 system_pods.go:61] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.213730  800087 system_pods.go:61] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.213734  800087 system_pods.go:61] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.213742  800087 system_pods.go:74] duration metric: took 5.244677ms to wait for pod list to return data ...
	I1007 13:41:24.213749  800087 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:41:24.216891  800087 default_sa.go:45] found service account: "default"
	I1007 13:41:24.216923  800087 default_sa.go:55] duration metric: took 3.165762ms for default service account to be created ...
	I1007 13:41:24.216936  800087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:41:24.366926  800087 system_pods.go:86] 9 kube-system pods found
	I1007 13:41:24.366962  800087 system_pods.go:89] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366970  800087 system_pods.go:89] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366977  800087 system_pods.go:89] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.366982  800087 system_pods.go:89] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.366986  800087 system_pods.go:89] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.366990  800087 system_pods.go:89] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.366993  800087 system_pods.go:89] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.366998  800087 system_pods.go:89] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.367001  800087 system_pods.go:89] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.367011  800087 system_pods.go:126] duration metric: took 150.068129ms to wait for k8s-apps to be running ...
	I1007 13:41:24.367018  800087 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:41:24.367064  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:24.383197  800087 system_svc.go:56] duration metric: took 16.165166ms WaitForService to wait for kubelet
	I1007 13:41:24.383232  800087 kubeadm.go:582] duration metric: took 5.975954284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:41:24.383256  800087 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:41:24.563433  800087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:41:24.563469  800087 node_conditions.go:123] node cpu capacity is 2
	I1007 13:41:24.563486  800087 node_conditions.go:105] duration metric: took 180.224622ms to run NodePressure ...
	I1007 13:41:24.563503  800087 start.go:241] waiting for startup goroutines ...
	I1007 13:41:24.563514  800087 start.go:246] waiting for cluster config update ...
	I1007 13:41:24.563529  800087 start.go:255] writing updated cluster config ...
	I1007 13:41:24.563898  800087 ssh_runner.go:195] Run: rm -f paused
	I1007 13:41:24.619289  800087 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:41:24.621527  800087 out.go:177] * Done! kubectl is now configured to use "no-preload-016701" cluster and "default" namespace by default
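	(Editor's sketch, not part of the captured log: the apiserver readiness checks recorded above can be reproduced by hand against the node, assuming the same node IP and apiserver port shown in this run; the healthz call may need the cluster's client certificates depending on anonymous-auth settings.)
		# confirm the kube-apiserver process is up, as api_server.go does via pgrep
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		# query the healthz endpoint that returned 200 in the log above
		curl -k https://192.168.39.197:8443/healthz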
	I1007 13:41:22.898326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:25.970388  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:32.050353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:35.122329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:41.202320  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:44.274335  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
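	(Editor's sketch, not part of the captured log: the kubeadm failure above points at the kubelet, and the commands it suggests can be run directly on the node, e.g. via minikube ssh; the cri-o socket path is the one kubeadm prints above, and CONTAINERID is a placeholder for whichever container turns out to be failing.)
		systemctl status kubelet
		journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID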
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:41:50.354360  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:53.426359  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:59.510321  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:02.578322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:08.658292  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:11.730352  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:17.810322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:20.882397  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:26.962343  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:30.034345  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:36.114353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:39.186453  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.266293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:48.338329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:54.418332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:57.490294  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:03.570372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:06.642286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:09.643253  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:09.643290  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643598  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:09.643627  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:09.645347  802960 machine.go:96] duration metric: took 4m37.397836997s to provisionDockerMachine
	I1007 13:43:09.645389  802960 fix.go:56] duration metric: took 4m37.421085967s for fixHost
	I1007 13:43:09.645394  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 4m37.421104002s
	W1007 13:43:09.645409  802960 start.go:714] error starting host: provision: host is not running
	W1007 13:43:09.645530  802960 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 13:43:09.645542  802960 start.go:729] Will try again in 5 seconds ...
	I1007 13:43:14.646206  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:43:14.646330  802960 start.go:364] duration metric: took 74.211µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:43:14.646374  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:43:14.646382  802960 fix.go:54] fixHost starting: 
	I1007 13:43:14.646717  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:43:14.646746  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:43:14.662426  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I1007 13:43:14.663016  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:43:14.663790  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:43:14.663822  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:43:14.664176  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:43:14.664429  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:14.664605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:43:14.666440  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Stopped err=<nil>
	I1007 13:43:14.666467  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	W1007 13:43:14.666648  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:43:14.668507  802960 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-489319" ...
	I1007 13:43:14.669973  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Start
	I1007 13:43:14.670294  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring networks are active...
	I1007 13:43:14.671299  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network default is active
	I1007 13:43:14.671623  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network mk-default-k8s-diff-port-489319 is active
	I1007 13:43:14.672332  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Getting domain xml...
	I1007 13:43:14.673106  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Creating domain...
	I1007 13:43:15.035227  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting to get IP...
	I1007 13:43:15.036226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036673  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036768  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.036657  804186 retry.go:31] will retry after 204.852009ms: waiting for machine to come up
	I1007 13:43:15.243827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244610  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244699  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.244581  804186 retry.go:31] will retry after 334.887784ms: waiting for machine to come up
	I1007 13:43:15.581226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581717  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581747  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.581665  804186 retry.go:31] will retry after 354.992125ms: waiting for machine to come up
	I1007 13:43:15.938078  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938577  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938614  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.938518  804186 retry.go:31] will retry after 592.784389ms: waiting for machine to come up
	I1007 13:43:16.533531  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534103  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534128  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:16.534054  804186 retry.go:31] will retry after 756.034822ms: waiting for machine to come up
	I1007 13:43:17.291995  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292785  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292807  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:17.292736  804186 retry.go:31] will retry after 896.816081ms: waiting for machine to come up
	I1007 13:43:18.191016  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191527  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191560  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:18.191466  804186 retry.go:31] will retry after 1.08609499s: waiting for machine to come up
	I1007 13:43:19.280109  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280537  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280576  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:19.280520  804186 retry.go:31] will retry after 1.392221474s: waiting for machine to come up
	I1007 13:43:20.674622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675071  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675115  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:20.675031  804186 retry.go:31] will retry after 1.78021676s: waiting for machine to come up
	I1007 13:43:22.457647  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458248  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:22.458160  804186 retry.go:31] will retry after 2.117086662s: waiting for machine to come up
	I1007 13:43:24.576838  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577415  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577445  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:24.577364  804186 retry.go:31] will retry after 2.850833043s: waiting for machine to come up
	I1007 13:43:27.432222  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432855  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432882  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:27.432789  804186 retry.go:31] will retry after 3.63047619s: waiting for machine to come up
	I1007 13:43:31.065089  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.065729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Found IP for machine: 192.168.61.101
	I1007 13:43:31.065759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserving static IP address...
	I1007 13:43:31.065782  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has current primary IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.066317  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.066362  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserved static IP address: 192.168.61.101
	I1007 13:43:31.066395  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | skip adding static IP to network mk-default-k8s-diff-port-489319 - found existing host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"}
	I1007 13:43:31.066407  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for SSH to be available...
	I1007 13:43:31.066449  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Getting to WaitForSSH function...
	I1007 13:43:31.068871  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069233  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.069265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH client type: external
	I1007 13:43:31.069398  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa (-rw-------)
	I1007 13:43:31.069451  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:43:31.069466  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | About to run SSH command:
	I1007 13:43:31.069475  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | exit 0
	I1007 13:43:31.194580  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | SSH cmd err, output: <nil>: 
	I1007 13:43:31.195021  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetConfigRaw
	I1007 13:43:31.195801  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.198966  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199324  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.199359  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199635  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:43:31.199893  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:43:31.199919  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:31.200168  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.202444  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202817  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.202849  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202989  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.203185  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203352  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.203683  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.203930  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.203943  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:43:31.307182  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:43:31.307224  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307497  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:31.307525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307722  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.310462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.310835  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.310905  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.311014  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.311192  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311437  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311613  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.311794  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.311969  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.311981  802960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489319 && echo "default-k8s-diff-port-489319" | sudo tee /etc/hostname
	I1007 13:43:31.436251  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489319
	
	I1007 13:43:31.436288  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.439927  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440241  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.440276  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440616  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.440887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441042  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441197  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.441360  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.441584  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.441612  802960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489319/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:43:31.552909  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
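	Note: the SSH snippet above is an idempotent /etc/hosts update: it leaves the file alone if the hostname already has an entry, rewrites the existing 127.0.1.1 line if there is one, and appends a new line otherwise. A minimal Go sketch of the same logic (illustrative only; setLoopbackHostname is not a minikube function):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// setLoopbackHostname mirrors the shell logic logged above: leave the hosts
	// content untouched if an entry for name already exists, otherwise rewrite
	// the 127.0.1.1 line in place or append a new one. Illustrative only.
	func setLoopbackHostname(hosts, name string) string {
		present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if present.MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(setLoopbackHostname(before, "default-k8s-diff-port-489319"))
	}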
	I1007 13:43:31.552947  802960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:43:31.552983  802960 buildroot.go:174] setting up certificates
	I1007 13:43:31.553002  802960 provision.go:84] configureAuth start
	I1007 13:43:31.553012  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.553454  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.556642  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557015  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.557055  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.559909  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560460  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.560487  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560719  802960 provision.go:143] copyHostCerts
	I1007 13:43:31.560792  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:43:31.560812  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:43:31.560889  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:43:31.561045  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:43:31.561058  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:43:31.561084  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:43:31.561171  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:43:31.561180  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:43:31.561208  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:43:31.561271  802960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489319 san=[127.0.0.1 192.168.61.101 default-k8s-diff-port-489319 localhost minikube]
	I1007 13:43:31.871377  802960 provision.go:177] copyRemoteCerts
	I1007 13:43:31.871459  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:43:31.871489  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.874464  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.874887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.874925  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.875112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.875368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.875547  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.875675  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:31.957423  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:43:31.988554  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1007 13:43:32.018470  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:43:32.046799  802960 provision.go:87] duration metric: took 493.782862ms to configureAuth
	I1007 13:43:32.046830  802960 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:43:32.047021  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:43:32.047151  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.050313  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.050727  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.050760  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.051011  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.051216  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051385  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051522  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.051685  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.051878  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.051893  802960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:43:32.291927  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:43:32.291957  802960 machine.go:96] duration metric: took 1.092049658s to provisionDockerMachine
	I1007 13:43:32.291970  802960 start.go:293] postStartSetup for "default-k8s-diff-port-489319" (driver="kvm2")
	I1007 13:43:32.291985  802960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:43:32.292025  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.292491  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:43:32.292523  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.296195  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296625  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.296660  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296889  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.297104  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.297300  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.297479  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.377749  802960 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:43:32.382419  802960 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:43:32.382459  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:43:32.382557  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:43:32.382663  802960 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:43:32.382767  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:43:32.394059  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:32.422256  802960 start.go:296] duration metric: took 130.264438ms for postStartSetup
	I1007 13:43:32.422310  802960 fix.go:56] duration metric: took 17.775926417s for fixHost
	I1007 13:43:32.422340  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.425739  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.426254  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.426678  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426941  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.427080  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.427294  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.427305  802960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:43:32.531411  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308612.494637714
	
	I1007 13:43:32.531442  802960 fix.go:216] guest clock: 1728308612.494637714
	I1007 13:43:32.531450  802960 fix.go:229] Guest: 2024-10-07 13:43:32.494637714 +0000 UTC Remote: 2024-10-07 13:43:32.422315329 +0000 UTC m=+300.358475670 (delta=72.322385ms)
	I1007 13:43:32.531474  802960 fix.go:200] guest clock delta is within tolerance: 72.322385ms
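	Note: the fix.go lines above record the guest-clock check: the guest reports date +%s.%N, the host compares that value against its own timestamp, and the ~72ms drift is accepted as within tolerance. A rough Go sketch of that comparison, using an assumed 2s threshold (the real limit is not shown in this log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and reports how far
	// it drifts from the host-side timestamp. The 2s tolerance is an assumed
	// placeholder for illustration, not minikube's actual limit.
	func guestClockDelta(guestOutput string, remote time.Time) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta < 2*time.Second
	}

	func main() {
		remote := time.Date(2024, 10, 7, 13, 43, 32, 422315329, time.UTC)
		delta, ok := guestClockDelta("1728308612.494637714", remote)
		fmt.Println(delta, ok) // ~72ms drift, within the assumed tolerance
	}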
	I1007 13:43:32.531480  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 17.885135029s
	I1007 13:43:32.531503  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.531787  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:32.534783  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.535265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535472  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536178  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536404  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536518  802960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:43:32.536581  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.536697  802960 ssh_runner.go:195] Run: cat /version.json
	I1007 13:43:32.536729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.539709  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.539743  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540166  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540202  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540348  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540417  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540598  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540638  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540762  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.540777  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540884  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.540947  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.541089  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.642238  802960 ssh_runner.go:195] Run: systemctl --version
	I1007 13:43:32.649391  802960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:43:32.799266  802960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:43:32.805598  802960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:43:32.805707  802960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:43:32.823518  802960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:43:32.823560  802960 start.go:495] detecting cgroup driver to use...
	I1007 13:43:32.823651  802960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:43:32.842054  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:43:32.858474  802960 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:43:32.858550  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:43:32.873750  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:43:32.889165  802960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:43:33.019729  802960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:43:33.182269  802960 docker.go:233] disabling docker service ...
	I1007 13:43:33.182371  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:43:33.198610  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:43:33.213911  802960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:43:33.343594  802960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:43:33.476026  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:43:33.493130  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:43:33.513584  802960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:43:33.513652  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.525714  802960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:43:33.525816  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.538658  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.551146  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.564914  802960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:43:33.578180  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.590140  802960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.610967  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.624890  802960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:43:33.636736  802960 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:43:33.636825  802960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:43:33.652573  802960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:43:33.665083  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:33.800780  802960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:43:33.898225  802960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:43:33.898309  802960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:43:33.903209  802960 start.go:563] Will wait 60s for crictl version
	I1007 13:43:33.903269  802960 ssh_runner.go:195] Run: which crictl
	I1007 13:43:33.907326  802960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:43:33.959008  802960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:43:33.959168  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:33.990929  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:34.023756  802960 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:43:34.025496  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:34.028784  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029327  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:34.029360  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029672  802960 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:43:34.034690  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:34.048101  802960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:43:34.048259  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:43:34.048325  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:34.086926  802960 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:43:34.087050  802960 ssh_runner.go:195] Run: which lz4
	I1007 13:43:34.091973  802960 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:43:34.096623  802960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:43:34.096671  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:43:35.604800  802960 crio.go:462] duration metric: took 1.512877493s to copy over tarball
	I1007 13:43:35.604892  802960 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:43:37.805292  802960 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200363211s)
	I1007 13:43:37.805327  802960 crio.go:469] duration metric: took 2.200488229s to extract the tarball
	I1007 13:43:37.805338  802960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:43:37.845477  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:37.895532  802960 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:43:37.895562  802960 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:43:37.895574  802960 kubeadm.go:934] updating node { 192.168.61.101 8444 v1.31.1 crio true true} ...
	I1007 13:43:37.895725  802960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-489319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:43:37.895804  802960 ssh_runner.go:195] Run: crio config
	I1007 13:43:37.949367  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:37.949395  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:37.949410  802960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:43:37.949433  802960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.101 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489319 NodeName:default-k8s-diff-port-489319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:43:37.949576  802960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.101
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:43:37.949659  802960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:43:37.959941  802960 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:43:37.960076  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:43:37.970766  802960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1007 13:43:37.989311  802960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:43:38.009634  802960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1007 13:43:38.027642  802960 ssh_runner.go:195] Run: grep 192.168.61.101	control-plane.minikube.internal$ /etc/hosts
	I1007 13:43:38.031764  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:38.044131  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:38.185253  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:43:38.212538  802960 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319 for IP: 192.168.61.101
	I1007 13:43:38.212565  802960 certs.go:194] generating shared ca certs ...
	I1007 13:43:38.212589  802960 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:43:38.212799  802960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:43:38.212859  802960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:43:38.212873  802960 certs.go:256] generating profile certs ...
	I1007 13:43:38.212997  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/client.key
	I1007 13:43:38.213082  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key.f1e25377
	I1007 13:43:38.213153  802960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key
	I1007 13:43:38.213325  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:43:38.213365  802960 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:43:38.213390  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:43:38.213425  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:43:38.213471  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:43:38.213501  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:43:38.213559  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:38.214588  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:43:38.266516  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:43:38.305985  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:43:38.353490  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:43:38.380638  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 13:43:38.424440  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:43:38.452428  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:43:38.480709  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:43:38.509639  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:43:38.536940  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:43:38.564021  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:43:38.591067  802960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:43:38.609218  802960 ssh_runner.go:195] Run: openssl version
	I1007 13:43:38.616235  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:43:38.629007  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634324  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634400  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.641330  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:43:38.654384  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:43:38.667134  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672330  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672407  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.678719  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:43:38.690565  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:43:38.705158  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710787  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710868  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.717093  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:43:38.729957  802960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:43:38.735559  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:43:38.742580  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:43:38.749684  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:43:38.756534  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:43:38.762897  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:43:38.770450  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:43:38.777701  802960 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:43:38.777813  802960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:43:38.777880  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.822678  802960 cri.go:89] found id: ""
	I1007 13:43:38.822746  802960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:43:38.833436  802960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:43:38.833463  802960 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:43:38.833516  802960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:43:38.844226  802960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:43:38.845383  802960 kubeconfig.go:125] found "default-k8s-diff-port-489319" server: "https://192.168.61.101:8444"
	I1007 13:43:38.848063  802960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:43:38.859087  802960 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.101
	I1007 13:43:38.859129  802960 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:43:38.859142  802960 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:43:38.859221  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.902955  802960 cri.go:89] found id: ""
	I1007 13:43:38.903054  802960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:43:38.920556  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:43:38.930998  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:43:38.931027  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:43:38.931095  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:43:38.940538  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:43:38.940608  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:43:38.951198  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:43:38.960653  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:43:38.960746  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:43:38.970800  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.981094  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:43:38.981176  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.991845  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:43:39.001966  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:43:39.002080  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:43:39.014014  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:43:39.026304  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:39.157169  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.098491  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.941274215s)
	I1007 13:43:41.098539  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.310925  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.402330  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.502763  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:43:41.502864  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.003197  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 
	
	
	==> CRI-O <==
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.134091177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308627134062971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1f1da06-31b6-461c-8d17-b3cb57a649d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.134752200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6faf6a47-fcc6-44d7-806c-e9080d21e042 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.134822906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6faf6a47-fcc6-44d7-806c-e9080d21e042 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.134867748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6faf6a47-fcc6-44d7-806c-e9080d21e042 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.172563816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8d3fa6c-bcd4-4fe9-82f2-d6cddc82e175 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.172669474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8d3fa6c-bcd4-4fe9-82f2-d6cddc82e175 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.174064390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6422734-2e97-43aa-83ac-e6107f655763 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.174490807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308627174468284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6422734-2e97-43aa-83ac-e6107f655763 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.175028682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b31d06ee-4591-4e0e-bdcf-77add72c75ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.175101213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b31d06ee-4591-4e0e-bdcf-77add72c75ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.175135483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b31d06ee-4591-4e0e-bdcf-77add72c75ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.218754430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09de46e3-2bfc-4069-8900-cdc33fc9cdce name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.218847062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09de46e3-2bfc-4069-8900-cdc33fc9cdce name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.219961292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8acbdfcd-2796-4d6a-92d0-431a741a5b00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.220369737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308627220345138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8acbdfcd-2796-4d6a-92d0-431a741a5b00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.220961442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=959b41e2-cd85-47ba-aabd-fe71e9f3957d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.221032171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=959b41e2-cd85-47ba-aabd-fe71e9f3957d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.221083679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=959b41e2-cd85-47ba-aabd-fe71e9f3957d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.253694085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4f905d4-9141-4c38-bfdf-49998dc8c054 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.253788680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4f905d4-9141-4c38-bfdf-49998dc8c054 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.255239803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08b92af6-bd24-48a9-ad78-d1bfde06f3e4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.255764073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308627255731847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08b92af6-bd24-48a9-ad78-d1bfde06f3e4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.256260757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92bb7e4e-86b5-4c2a-97b4-131abb65c34c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.256329998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92bb7e4e-86b5-4c2a-97b4-131abb65c34c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:43:47 old-k8s-version-120978 crio[632]: time="2024-10-07 13:43:47.256375717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92bb7e4e-86b5-4c2a-97b4-131abb65c34c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059927] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.123867] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.762449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.678964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.628433] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.062444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070622] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.220328] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.150806] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.291850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +7.145908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820671] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[Oct 7 13:36] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 13:39] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Oct 7 13:41] systemd-fstab-generator[5332]: Ignoring "noauto" option for root device
	[  +0.074388] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:43:47 up 8 min,  0 users,  load average: 0.22, 0.12, 0.05
	Linux old-k8s-version-120978 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00025efc0, 0xc000cbc240, 0x1, 0x0, 0x0)
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008dda40)
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: goroutine 150 [select]:
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000bcba40, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c82cc0, 0x0, 0x0)
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008dda40)
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 07 13:43:45 old-k8s-version-120978 kubelet[5513]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 07 13:43:45 old-k8s-version-120978 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 07 13:43:45 old-k8s-version-120978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 07 13:43:46 old-k8s-version-120978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 07 13:43:46 old-k8s-version-120978 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 07 13:43:46 old-k8s-version-120978 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 07 13:43:46 old-k8s-version-120978 kubelet[5580]: I1007 13:43:46.198930    5580 server.go:416] Version: v1.20.0
	Oct 07 13:43:46 old-k8s-version-120978 kubelet[5580]: I1007 13:43:46.199276    5580 server.go:837] Client rotation is on, will bootstrap in background
	Oct 07 13:43:46 old-k8s-version-120978 kubelet[5580]: I1007 13:43:46.201785    5580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 07 13:43:46 old-k8s-version-120978 kubelet[5580]: W1007 13:43:46.202782    5580 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 07 13:43:46 old-k8s-version-120978 kubelet[5580]: I1007 13:43:46.203138    5580 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (262.031318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-120978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (752.15s)
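For convenience, the kubelet troubleshooting steps that the kubeadm/minikube output above suggests can be replayed against this profile by hand. The commands below are a minimal sketch assembled only from what is quoted in the log above (the profile name old-k8s-version-120978, the CRI-O socket path, and the --extra-config=kubelet.cgroup-driver=systemd hint all come from that output); they are illustrative, not an additional diagnosis of this run:

	# Inspect the kubelet on the node, as suggested by the kubeadm output above
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 "sudo journalctl -xeu kubelet"
	# List any Kubernetes containers the runtime managed to start (command quoted from the kubeadm output)
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver hint printed in the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-120978 --extra-config=kubelet.cgroup-driver=systemd
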

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-489319 --alsologtostderr -v=3
E1007 13:36:36.773724  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-489319 --alsologtostderr -v=3: exit status 82 (2m0.554139454s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-489319"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:36:00.661468  802256 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:36:00.661640  802256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:36:00.661655  802256 out.go:358] Setting ErrFile to fd 2...
	I1007 13:36:00.661662  802256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:36:00.661852  802256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:36:00.662177  802256 out.go:352] Setting JSON to false
	I1007 13:36:00.662259  802256 mustload.go:65] Loading cluster: default-k8s-diff-port-489319
	I1007 13:36:00.662618  802256 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:00.662703  802256 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:36:00.662900  802256 mustload.go:65] Loading cluster: default-k8s-diff-port-489319
	I1007 13:36:00.663042  802256 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:00.663092  802256 stop.go:39] StopHost: default-k8s-diff-port-489319
	I1007 13:36:00.663533  802256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:36:00.663588  802256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:36:00.680118  802256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I1007 13:36:00.680654  802256 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:36:00.681370  802256 main.go:141] libmachine: Using API Version  1
	I1007 13:36:00.681407  802256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:36:00.681854  802256 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:36:00.684758  802256 out.go:177] * Stopping node "default-k8s-diff-port-489319"  ...
	I1007 13:36:00.686306  802256 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 13:36:00.686347  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:36:00.686686  802256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 13:36:00.686724  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:36:00.690015  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:36:00.690456  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:36:00.690491  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:36:00.690654  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:36:00.690850  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:36:00.691030  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:36:00.691194  802256 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:36:00.800089  802256 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 13:36:00.863392  802256 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 13:36:00.932617  802256 main.go:141] libmachine: Stopping "default-k8s-diff-port-489319"...
	I1007 13:36:00.932647  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:36:00.934476  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Stop
	I1007 13:36:00.938493  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 0/120
	I1007 13:36:01.940797  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 1/120
	I1007 13:36:02.942434  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 2/120
	I1007 13:36:03.944303  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 3/120
	I1007 13:36:04.945864  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 4/120
	I1007 13:36:05.947544  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 5/120
	I1007 13:36:06.949440  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 6/120
	I1007 13:36:07.951064  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 7/120
	I1007 13:36:08.953392  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 8/120
	I1007 13:36:09.955145  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 9/120
	I1007 13:36:10.957632  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 10/120
	I1007 13:36:11.959478  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 11/120
	I1007 13:36:12.961427  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 12/120
	I1007 13:36:13.962838  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 13/120
	I1007 13:36:14.965074  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 14/120
	I1007 13:36:15.967306  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 15/120
	I1007 13:36:16.968783  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 16/120
	I1007 13:36:17.970116  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 17/120
	I1007 13:36:18.971544  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 18/120
	I1007 13:36:19.972906  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 19/120
	I1007 13:36:20.974497  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 20/120
	I1007 13:36:21.976060  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 21/120
	I1007 13:36:22.977963  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 22/120
	I1007 13:36:23.979116  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 23/120
	I1007 13:36:24.980736  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 24/120
	I1007 13:36:25.982865  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 25/120
	I1007 13:36:26.985109  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 26/120
	I1007 13:36:27.987480  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 27/120
	I1007 13:36:28.989036  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 28/120
	I1007 13:36:29.990567  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 29/120
	I1007 13:36:30.992261  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 30/120
	I1007 13:36:31.993606  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 31/120
	I1007 13:36:32.995028  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 32/120
	I1007 13:36:33.996781  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 33/120
	I1007 13:36:34.998446  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 34/120
	I1007 13:36:36.000585  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 35/120
	I1007 13:36:37.002992  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 36/120
	I1007 13:36:38.005017  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 37/120
	I1007 13:36:39.006545  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 38/120
	I1007 13:36:40.007998  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 39/120
	I1007 13:36:41.009456  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 40/120
	I1007 13:36:42.011108  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 41/120
	I1007 13:36:43.012688  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 42/120
	I1007 13:36:44.014312  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 43/120
	I1007 13:36:45.016854  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 44/120
	I1007 13:36:46.019196  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 45/120
	I1007 13:36:47.020744  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 46/120
	I1007 13:36:48.022153  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 47/120
	I1007 13:36:49.023392  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 48/120
	I1007 13:36:50.025240  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 49/120
	I1007 13:36:51.027656  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 50/120
	I1007 13:36:52.029081  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 51/120
	I1007 13:36:53.030618  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 52/120
	I1007 13:36:54.031840  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 53/120
	I1007 13:36:55.033302  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 54/120
	I1007 13:36:56.035394  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 55/120
	I1007 13:36:57.036642  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 56/120
	I1007 13:36:58.038347  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 57/120
	I1007 13:36:59.039568  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 58/120
	I1007 13:37:00.040774  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 59/120
	I1007 13:37:01.042272  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 60/120
	I1007 13:37:02.044281  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 61/120
	I1007 13:37:03.045645  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 62/120
	I1007 13:37:04.047533  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 63/120
	I1007 13:37:05.048862  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 64/120
	I1007 13:37:06.051394  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 65/120
	I1007 13:37:07.053105  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 66/120
	I1007 13:37:08.054524  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 67/120
	I1007 13:37:09.056684  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 68/120
	I1007 13:37:10.059271  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 69/120
	I1007 13:37:11.061619  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 70/120
	I1007 13:37:12.063171  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 71/120
	I1007 13:37:13.064504  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 72/120
	I1007 13:37:14.066154  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 73/120
	I1007 13:37:15.068382  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 74/120
	I1007 13:37:16.070790  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 75/120
	I1007 13:37:17.073225  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 76/120
	I1007 13:37:18.074672  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 77/120
	I1007 13:37:19.076150  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 78/120
	I1007 13:37:20.077574  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 79/120
	I1007 13:37:21.079355  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 80/120
	I1007 13:37:22.080640  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 81/120
	I1007 13:37:23.081869  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 82/120
	I1007 13:37:24.083497  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 83/120
	I1007 13:37:25.084876  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 84/120
	I1007 13:37:26.087306  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 85/120
	I1007 13:37:27.088760  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 86/120
	I1007 13:37:28.090226  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 87/120
	I1007 13:37:29.091545  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 88/120
	I1007 13:37:30.093078  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 89/120
	I1007 13:37:31.095150  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 90/120
	I1007 13:37:32.097461  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 91/120
	I1007 13:37:33.099330  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 92/120
	I1007 13:37:34.100914  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 93/120
	I1007 13:37:35.102442  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 94/120
	I1007 13:37:36.104049  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 95/120
	I1007 13:37:37.105396  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 96/120
	I1007 13:37:38.107137  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 97/120
	I1007 13:37:39.109062  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 98/120
	I1007 13:37:40.110515  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 99/120
	I1007 13:37:41.112902  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 100/120
	I1007 13:37:42.114598  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 101/120
	I1007 13:37:43.116360  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 102/120
	I1007 13:37:44.117865  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 103/120
	I1007 13:37:45.119502  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 104/120
	I1007 13:37:46.121732  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 105/120
	I1007 13:37:47.123231  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 106/120
	I1007 13:37:48.124647  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 107/120
	I1007 13:37:49.126118  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 108/120
	I1007 13:37:50.127583  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 109/120
	I1007 13:37:51.129336  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 110/120
	I1007 13:37:52.131282  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 111/120
	I1007 13:37:53.132752  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 112/120
	I1007 13:37:54.135356  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 113/120
	I1007 13:37:55.137325  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 114/120
	I1007 13:37:56.139454  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 115/120
	I1007 13:37:57.140915  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 116/120
	I1007 13:37:58.142140  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 117/120
	I1007 13:37:59.143842  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 118/120
	I1007 13:38:00.145567  802256 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for machine to stop 119/120
	I1007 13:38:01.147080  802256 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 13:38:01.147205  802256 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 13:38:01.149527  802256 out.go:201] 
	W1007 13:38:01.151106  802256 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 13:38:01.151132  802256 out.go:270] * 
	* 
	W1007 13:38:01.155408  802256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:38:01.156957  802256 out.go:201] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-489319 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319: exit status 3 (18.474735862s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1007 13:38:19.634491  802753 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host
	E1007 13:38:19.634512  802753 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-489319" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)
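The repeated "Waiting for machine to stop N/120" lines followed by GUEST_STOP_TIMEOUT are the signature of a bounded polling loop that gives up once a fixed retry budget is exhausted. The Go sketch below only illustrates that pattern; it is not minikube's actual stop path, and waitForStop, vmState and getState are hypothetical names.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for whatever state string the driver reports ("Running", "Stopped", ...).
type vmState string

// waitForStop polls the machine once per interval, up to maxRetries times,
// which is the pattern behind the "Waiting for machine to stop N/120" lines.
func waitForStop(getState func() vmState, maxRetries int, interval time.Duration) error {
	for i := 0; i < maxRetries; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(interval)
	}
	// When the retry budget runs out, the caller surfaces this as
	// GUEST_STOP_TIMEOUT, as in the stderr block above.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A VM that never reports "Stopped", to reproduce the timeout path quickly.
	err := waitForStop(func() vmState { return "Running" }, 5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}
```

Run against a state source that never reports "Stopped", the sketch prints the same countdown and then the same "unable to stop vm" error quoted in the stderr block above.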

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319: exit status 3 (3.199836255s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1007 13:38:22.834415  802833 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host
	E1007 13:38:22.834440  802833 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-489319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-489319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155524005s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-489319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319: exit status 3 (3.064278641s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1007 13:38:32.054438  802913 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host
	E1007 13:38:32.054461  802913 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.101:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-489319" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
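The repeated "dial tcp 192.168.61.101:22: connect: no route to host" errors above come from the status probe's SSH connection attempt failing at the TCP layer while the VM is unreachable. Below is a minimal sketch of such a reachability check; it assumes nothing about minikube's own code, and sshReachable is a hypothetical name.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable dials the node's SSH port the way a status probe might before
// opening a session; when the VM's network is gone this returns an error such
// as "dial tcp 192.168.61.101:22: connect: no route to host".
func sshReachable(ip string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.61.101", 5*time.Second); err != nil {
		fmt.Println("status error:", err)
	}
}
```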

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.83s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1007 13:40:13.698694  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:41:16.522264  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-653322 -n embed-certs-653322
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-07 13:49:12.38114815 +0000 UTC m=+6085.579688135
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
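The "context deadline exceeded" failure above is what a poll loop returns when its context's 9m0s budget expires before any pod matching the label becomes ready. A minimal sketch of that wait pattern follows; waitForPods and listReady are hypothetical names standing in for the test helper, not the actual helpers_test.go code.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForPods polls a label-selector check until it reports a ready pod or the
// context deadline expires; the expired case is where the test's
// "context deadline exceeded" message comes from.
func waitForPods(ctx context.Context, listReady func() (bool, error)) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		ready, err := listReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded once the budget runs out
		case <-ticker.C:
		}
	}
}

func main() {
	// A 3s budget and a check that never succeeds, to reproduce the timeout path.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitForPods(ctx, func() (bool, error) { return false, nil })
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println(`pod "k8s-app=kubernetes-dashboard" failed to start: context deadline exceeded`)
	}
}
```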
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-653322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-653322 logs -n 25: (1.511016479s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:38:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:38:32.108474  802960 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:38:32.108648  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108659  802960 out.go:358] Setting ErrFile to fd 2...
	I1007 13:38:32.108665  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108864  802960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:38:32.109477  802960 out.go:352] Setting JSON to false
	I1007 13:38:32.110672  802960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12061,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:38:32.110773  802960 start.go:139] virtualization: kvm guest
	I1007 13:38:32.113566  802960 out.go:177] * [default-k8s-diff-port-489319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:38:32.115580  802960 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:38:32.115627  802960 notify.go:220] Checking for updates...
	I1007 13:38:32.118464  802960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:38:32.120173  802960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:38:32.121799  802960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:38:32.123382  802960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:38:32.125020  802960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:38:29.209336  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:31.212514  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:32.126861  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:38:32.127255  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.127337  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.143671  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1007 13:38:32.144158  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.144820  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.144844  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.145206  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.145416  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.145655  802960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:38:32.146010  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.146112  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.161508  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1007 13:38:32.162082  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.162517  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.162541  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.162886  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.163112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.200281  802960 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:38:32.201380  802960 start.go:297] selected driver: kvm2
	I1007 13:38:32.201393  802960 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.201499  802960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:38:32.202260  802960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.202353  802960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:38:32.218742  802960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:38:32.219129  802960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:38:32.219168  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:38:32.219221  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:38:32.219254  802960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.219380  802960 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.222273  802960 out.go:177] * Starting "default-k8s-diff-port-489319" primary control-plane node in "default-k8s-diff-port-489319" cluster
	I1007 13:38:32.223750  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:38:32.223801  802960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:38:32.223816  802960 cache.go:56] Caching tarball of preloaded images
	I1007 13:38:32.223891  802960 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:38:32.223901  802960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:38:32.223997  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:38:32.224208  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:38:32.224280  802960 start.go:364] duration metric: took 38.73µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:38:32.224297  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:38:32.224303  802960 fix.go:54] fixHost starting: 
	I1007 13:38:32.224637  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.224674  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.239368  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1007 13:38:32.239869  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.240386  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.240409  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.240813  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.241063  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.241228  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:38:32.243196  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Running err=<nil>
	W1007 13:38:32.243217  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:38:32.245881  802960 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-489319" VM ...
	I1007 13:38:30.514797  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:33.014487  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
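
The cycle repeated above follows one pattern: for each control-plane component the harness asks CRI-O (via crictl) for matching containers, finds none, and then falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A minimal Go sketch of that listing step, assuming crictl is available on the guest and using the same component names the trace queries (illustrative only, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names the trace queries, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same query the log shows: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

An empty ID list here corresponds to the repeated "No container was found matching ..." warnings in the trace.
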
	I1007 13:38:32.247486  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:38:32.247524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.247949  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:38:32.250961  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:38:32.251539  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251823  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:38:32.252088  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252375  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:38:32.252944  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:38:32.253182  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:38:32.253197  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:38:35.122367  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:33.709093  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.709691  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.514611  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:38.014557  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
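
The interleaved metrics-server lines come from separate wait loops that re-read the pod on each tick and log whether its Ready condition is True. A rough Go sketch of that kind of poll, assuming client-go is available and the kubeconfig sits at the default path; the pod name is copied from the log and the code is illustrative, not the test harness's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// Pod name taken from the trace above; treat it as a placeholder elsewhere.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-zsm9l", metav1.GetOptions{})
		if err == nil {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q Ready: %v\n", pod.Name, ready)
			if ready {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
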
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:38.198306  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:37.710573  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.210415  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.516522  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.013328  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:44.274468  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:42.709396  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:44.709744  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.711052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:45.015094  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:47.513659  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:49.515994  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
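
Every "describe nodes" attempt in this trace fails the same way: kubectl targets localhost:8443 from the kubeconfig and the connection is refused, which simply means nothing is listening on the apiserver port yet. A tiny Go sketch of that reachability check, run on the guest (the address is taken from the error text; this is an illustration, not part of the test code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" in the log means this dial fails until kube-apiserver binds the port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
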
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:47.346399  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:49.208303  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:51.209295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.013939  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:54.514863  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:53.426353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:56.498332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:53.709052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.710245  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:57.014521  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:59.020215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.208916  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:00.210007  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.514807  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:04.015032  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:05.618355  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:02.709614  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:05.211295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:06.514215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.515091  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.690293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:07.709699  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:10.208983  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.014066  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:13.014936  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:14.774418  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:12.209697  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:14.710276  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:15.514317  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.514843  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:39:17.842234  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:17.210309  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:19.210449  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:21.709672  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:20.014047  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:22.014646  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:24.015649  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:23.926328  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:26.994313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:24.203346  800212 pod_ready.go:82] duration metric: took 4m0.00074454s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" ...
	E1007 13:39:24.203420  800212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:39:24.203447  800212 pod_ready.go:39] duration metric: took 4m15.010484686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:39:24.203483  800212 kubeadm.go:597] duration metric: took 4m22.534978235s to restartPrimaryControlPlane
	W1007 13:39:24.203568  800212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:24.203597  800212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:26.018248  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:28.513858  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:30.513894  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:33.013867  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:33.074393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:36.146280  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:35.015200  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:37.514925  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:40.015715  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.514458  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.226242  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.298298  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
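
The two lines above come from a separate machine-driver process (802960) that cannot reach its guest's SSH port at 192.168.61.101:22. A quick way to reproduce that kind of connectivity probe from the host is a plain TCP dial; the address is taken from the log and the timeout is illustrative:

    // dialprobe.go - sketch of an SSH-port reachability check like the one failing above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.61.101:22", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. connect: no route to host
            return
        }
        defer conn.Close()
        fmt.Println("SSH port reachable from", conn.LocalAddr())
    }
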
	I1007 13:39:45.013741  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:47.014360  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:49.015110  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
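
Interleaved with the other processes, process 800087 keeps reporting that the metrics-server pod's Ready condition is still False. A rough, self-contained equivalent of that readiness poll is sketched here, shelling out to kubectl rather than using minikube's internal client; the pod name and namespace are copied from the log, while the timeout and interval are illustrative.

    // podready.go - poll a pod's Ready condition until it is "True" or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := podReady("kube-system", "metrics-server-6867b74b74-zsm9l")
            switch {
            case err != nil:
                fmt.Println("poll error:", err)
            case ok:
                fmt.Println("pod is Ready")
                return
            default:
                fmt.Println(`pod has status "Ready":"False"`)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
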
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:50.616036  800212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.41240969s)
	I1007 13:39:50.616124  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:50.638334  800212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:50.654214  800212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:50.672345  800212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:50.672370  800212 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:50.672429  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:50.699073  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:50.699139  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:50.711774  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:50.737818  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:50.737885  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:50.749603  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.760893  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:50.760965  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.771572  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:50.781793  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:50.781856  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
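
The sequence above is the stale-kubeconfig cleanup: for each file under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the grep fails (here all four files are simply missing, so grep exits with status 2 and the rm is a no-op). Below is a simplified sketch of that keep-or-remove decision, using the endpoint and paths from the log; it would need root to act on the real files and is not the actual kubeadm.go logic.

    // staleconfig.go - keep a kubeconfig only if it already points at the expected
    // control-plane endpoint, otherwise remove it so kubeadm init can rewrite it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Mirrors: sudo grep <endpoint> <file> ... followed by sudo rm -f <file>
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = os.Remove(f)
                continue
            }
            fmt.Printf("%s already points at %s, keeping\n", f, endpoint)
        }
    }
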
	I1007 13:39:50.793541  800212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:50.851411  800212 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:39:50.851486  800212 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:50.967773  800212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:50.967938  800212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:50.968105  800212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:50.976935  800212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:51.378305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:50.979096  800212 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:50.979227  800212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:50.979291  800212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:50.979375  800212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:50.979467  800212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:50.979560  800212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:50.979634  800212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:50.979717  800212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:50.979789  800212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:50.979857  800212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:50.979925  800212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:50.979959  800212 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:50.980011  800212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:51.280206  800212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:51.430988  800212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:39:51.677074  800212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:51.867985  800212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:52.283613  800212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:52.284108  800212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:52.288874  800212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.514892  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.515960  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:39:54.454391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:52.290945  800212 out.go:235]   - Booting up control plane ...
	I1007 13:39:52.291058  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:52.291164  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:52.291610  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:52.312059  800212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:52.318321  800212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:52.318412  800212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:52.456671  800212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:39:52.456802  800212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:39:52.958340  800212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.579104ms
	I1007 13:39:52.958484  800212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:39:57.959379  800212 kubeadm.go:310] [api-check] The API server is healthy after 5.001260012s
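
The kubelet-check and api-check lines poll health endpoints until they answer: the kubelet's healthz on 127.0.0.1:10248 (as printed in the log) and the API server's health endpoint. A minimal probe of the same shape is sketched below; the API server URL is purely illustrative (kubeadm talks to the advertised address, and TLS verification is skipped here only to keep the example short).

    // healthprobe.go - hit the kubelet and API server health endpoints once and print the status.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probe(client *http.Client, url string) {
        resp, err := client.Get(url)
        if err != nil {
            fmt.Printf("%s: %v\n", url, err)
            return
        }
        defer resp.Body.Close()
        fmt.Printf("%s: %s\n", url, resp.Status)
    }

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        probe(client, "http://127.0.0.1:10248/healthz") // kubelet, port from the log
        probe(client, "https://127.0.0.1:8443/healthz") // API server, URL illustrative only
    }
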
	I1007 13:39:57.980499  800212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:39:57.999006  800212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:39:58.043754  800212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:39:58.044050  800212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-653322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:39:58.062167  800212 kubeadm.go:310] [bootstrap-token] Using token: 72a6vd.dmbcvepur9l2dhmv
	I1007 13:39:58.064163  800212 out.go:235]   - Configuring RBAC rules ...
	I1007 13:39:58.064326  800212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:39:58.079082  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:39:58.094414  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:39:58.099862  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:39:58.109846  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:39:58.122572  800212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:39:58.370342  800212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:39:58.808645  800212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:39:59.367759  800212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:39:59.368708  800212 kubeadm.go:310] 
	I1007 13:39:59.368834  800212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:39:59.368859  800212 kubeadm.go:310] 
	I1007 13:39:59.368976  800212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:39:59.368991  800212 kubeadm.go:310] 
	I1007 13:39:59.369031  800212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:39:59.369111  800212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:39:59.369155  800212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:39:59.369162  800212 kubeadm.go:310] 
	I1007 13:39:59.369217  800212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:39:59.369245  800212 kubeadm.go:310] 
	I1007 13:39:59.369317  800212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:39:59.369329  800212 kubeadm.go:310] 
	I1007 13:39:59.369390  800212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:39:59.369487  800212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:39:59.369588  800212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:39:59.369600  800212 kubeadm.go:310] 
	I1007 13:39:59.369722  800212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:39:59.369826  800212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:39:59.369838  800212 kubeadm.go:310] 
	I1007 13:39:59.369960  800212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370113  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:39:59.370151  800212 kubeadm.go:310] 	--control-plane 
	I1007 13:39:59.370160  800212 kubeadm.go:310] 
	I1007 13:39:59.370302  800212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:39:59.370331  800212 kubeadm.go:310] 
	I1007 13:39:59.370458  800212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370592  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:39:59.371701  800212 kubeadm.go:310] W1007 13:39:50.802353    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372082  800212 kubeadm.go:310] W1007 13:39:50.803265    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372217  800212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
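
The join commands that kubeadm prints carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A joining node can recompute and compare that value; the sketch below does so, assuming the conventional /etc/kubernetes/pki/ca.crt location.

    // cahash.go - recompute the sha256:<hex> value used by --discovery-token-ca-cert-hash.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run on the freshly initialized control plane, this should print the same sha256:c52291ef... value shown in the join command above.
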
	I1007 13:39:59.372252  800212 cni.go:84] Creating CNI manager for ""
	I1007 13:39:59.372266  800212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:39:59.374383  800212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:39:56.015201  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:58.517383  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:00.534326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:59.376063  800212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:39:59.389097  800212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:39:59.409782  800212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:39:59.409864  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:39:59.409859  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-653322 minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=embed-certs-653322 minikube.k8s.io/primary=true
	I1007 13:39:59.451756  800212 ops.go:34] apiserver oom_adj: -16
	I1007 13:39:59.647019  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.147361  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.647505  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.147866  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.647444  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.147271  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.647066  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.147382  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.647825  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.796730  800212 kubeadm.go:1113] duration metric: took 4.386947643s to wait for elevateKubeSystemPrivileges
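
The repeated "kubectl get sa default" lines are the wait behind the elevateKubeSystemPrivileges duration reported above: the command is retried roughly every 500ms until the "default" ServiceAccount exists, which signals the new control plane is far enough along for the remaining setup steps. A minimal version of that poll follows; the kubectl path and kubeconfig are the ones in the log, while the timeout is an assumption rather than minikube's value.

    // waitsa.go - retry "kubectl get sa default" until the ServiceAccount controller has created it.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        start := time.Now()
        deadline := start.Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                fmt.Printf("default ServiceAccount present after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }
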
	I1007 13:40:03.796776  800212 kubeadm.go:394] duration metric: took 5m2.178460784s to StartCluster
	I1007 13:40:03.796802  800212 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.796927  800212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:40:03.800809  800212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.801152  800212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:40:03.801235  800212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:40:03.801341  800212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-653322"
	I1007 13:40:03.801366  800212 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-653322"
	W1007 13:40:03.801374  800212 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:40:03.801380  800212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-653322"
	I1007 13:40:03.801397  800212 addons.go:69] Setting metrics-server=true in profile "embed-certs-653322"
	I1007 13:40:03.801418  800212 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:40:03.801428  800212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-653322"
	I1007 13:40:03.801442  800212 addons.go:234] Setting addon metrics-server=true in "embed-certs-653322"
	W1007 13:40:03.801452  800212 addons.go:243] addon metrics-server should already be in state true
	I1007 13:40:03.801485  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801411  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801854  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801895  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801901  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.801908  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801937  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.802059  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.803364  800212 out.go:177] * Verifying Kubernetes components...
	I1007 13:40:03.805464  800212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:40:03.820021  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1007 13:40:03.820297  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1007 13:40:03.820632  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.820812  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.821460  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821482  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.821598  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1007 13:40:03.821627  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821639  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.822131  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822377  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.822388  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822769  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822823  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.822938  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822990  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.823583  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.823609  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.824011  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.824209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.828672  800212 addons.go:234] Setting addon default-storageclass=true in "embed-certs-653322"
	W1007 13:40:03.828697  800212 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:40:03.828731  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.829118  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.829169  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.839251  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1007 13:40:03.839800  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.840506  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.840533  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.840894  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.841130  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.842660  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1007 13:40:03.843181  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.843235  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.843819  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.843841  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.844191  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.844469  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.845247  800212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:40:03.846191  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.846688  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:40:03.846712  800212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:40:03.846737  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.847801  800212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:40:01.015857  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.515782  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.849482  800212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:03.849504  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:40:03.849528  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.851923  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852765  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.852798  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852987  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.853209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.853367  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.853482  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.854532  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1007 13:40:03.854540  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855100  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.855127  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855438  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.855484  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.855836  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.856149  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.856179  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.856258  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.856436  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.856791  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.857523  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.857572  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.873780  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1007 13:40:03.874162  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.874943  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.874958  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.875358  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.875581  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.877658  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.877924  800212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:03.877940  800212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:40:03.877962  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.881764  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882241  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.882272  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882619  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.882839  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.882999  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.883146  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:04.059493  800212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:40:04.092602  800212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135614  800212 node_ready.go:49] node "embed-certs-653322" has status "Ready":"True"
	I1007 13:40:04.135639  800212 node_ready.go:38] duration metric: took 42.999262ms for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135649  800212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:04.168633  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:04.177323  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:04.206431  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:04.358331  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:40:04.358360  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:40:04.453932  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:40:04.453978  800212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:40:04.543045  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:04.543079  800212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:40:04.628016  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:05.373199  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166722968s)
	I1007 13:40:05.373269  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373286  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373188  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195822413s)
	I1007 13:40:05.373374  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373395  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373726  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373746  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373756  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373764  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373772  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.373786  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373798  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373810  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373819  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.374033  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374019  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374056  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.374077  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374104  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374123  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.449400  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.449435  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.449768  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.449785  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034194  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.406118465s)
	I1007 13:40:06.034270  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034292  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034583  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034603  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034613  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034620  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034852  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:06.034920  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034951  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034967  800212 addons.go:475] Verifying addon metrics-server=true in "embed-certs-653322"
	I1007 13:40:06.036901  800212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:40:03.602357  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:06.038108  800212 addons.go:510] duration metric: took 2.236891318s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
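The addon phase above ends with storage-provisioner, default-storageclass and metrics-server enabled for the embed-certs-653322 profile. A minimal way to cross-check this from the host, assuming the profile and kubeconfig context names taken from this log (output will differ from run to run):

    # Addon status as minikube sees it for this profile
    minikube addons list -p embed-certs-653322

    # Confirm the objects actually landed in the cluster
    kubectl --context embed-certs-653322 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-653322 get storageclass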
	I1007 13:40:06.178973  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:06.015270  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.514554  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.675453  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:10.182593  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.182620  800212 pod_ready.go:82] duration metric: took 6.013956349s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.182630  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189183  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.189216  800212 pod_ready.go:82] duration metric: took 6.578623ms for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189229  800212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195272  800212 pod_ready.go:93] pod "etcd-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.195298  800212 pod_ready.go:82] duration metric: took 6.06024ms for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195308  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203341  800212 pod_ready.go:93] pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.203365  800212 pod_ready.go:82] duration metric: took 8.050464ms for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203375  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209333  800212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.209364  800212 pod_ready.go:82] duration metric: took 5.980877ms for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209377  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573541  800212 pod_ready.go:93] pod "kube-proxy-z9r92" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.573574  800212 pod_ready.go:82] duration metric: took 364.188673ms for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573586  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973294  800212 pod_ready.go:93] pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.973325  800212 pod_ready.go:82] duration metric: took 399.732244ms for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973334  800212 pod_ready.go:39] duration metric: took 6.837673484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:10.973354  800212 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:40:10.973424  800212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:40:10.989629  800212 api_server.go:72] duration metric: took 7.188432004s to wait for apiserver process to appear ...
	I1007 13:40:10.989661  800212 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:40:10.989690  800212 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1007 13:40:10.994679  800212 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1007 13:40:10.995855  800212 api_server.go:141] control plane version: v1.31.1
	I1007 13:40:10.995882  800212 api_server.go:131] duration metric: took 6.212413ms to wait for apiserver health ...
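The healthz probe above is a plain HTTPS GET against the API server. An equivalent check from the host, assuming the endpoint https://192.168.50.36:8443 shown in this log; /healthz (and /readyz) are readable without client credentials on default kubeadm clusters, so -k is normally enough:

    # Expect HTTP 200 with the literal body "ok" when the control plane is healthy
    curl -k https://192.168.50.36:8443/healthz
    # More granular checks if /healthz is failing
    curl -k "https://192.168.50.36:8443/readyz?verbose"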
	I1007 13:40:10.995894  800212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:40:11.176174  800212 system_pods.go:59] 9 kube-system pods found
	I1007 13:40:11.176207  800212 system_pods.go:61] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.176213  800212 system_pods.go:61] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.176217  800212 system_pods.go:61] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.176221  800212 system_pods.go:61] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.176225  800212 system_pods.go:61] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.176228  800212 system_pods.go:61] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.176231  800212 system_pods.go:61] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.176236  800212 system_pods.go:61] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.176240  800212 system_pods.go:61] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.176251  800212 system_pods.go:74] duration metric: took 180.350309ms to wait for pod list to return data ...
	I1007 13:40:11.176258  800212 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:40:11.374362  800212 default_sa.go:45] found service account: "default"
	I1007 13:40:11.374397  800212 default_sa.go:55] duration metric: took 198.130993ms for default service account to be created ...
	I1007 13:40:11.374410  800212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:40:11.577087  800212 system_pods.go:86] 9 kube-system pods found
	I1007 13:40:11.577124  800212 system_pods.go:89] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.577130  800212 system_pods.go:89] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.577134  800212 system_pods.go:89] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.577138  800212 system_pods.go:89] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.577141  800212 system_pods.go:89] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.577145  800212 system_pods.go:89] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.577149  800212 system_pods.go:89] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.577157  800212 system_pods.go:89] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.577161  800212 system_pods.go:89] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.577171  800212 system_pods.go:126] duration metric: took 202.754732ms to wait for k8s-apps to be running ...
	I1007 13:40:11.577179  800212 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:40:11.577228  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:40:11.595122  800212 system_svc.go:56] duration metric: took 17.926197ms WaitForService to wait for kubelet
	I1007 13:40:11.595159  800212 kubeadm.go:582] duration metric: took 7.793966621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:40:11.595189  800212 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:40:11.774788  800212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:40:11.774819  800212 node_conditions.go:123] node cpu capacity is 2
	I1007 13:40:11.774833  800212 node_conditions.go:105] duration metric: took 179.638486ms to run NodePressure ...
	I1007 13:40:11.774845  800212 start.go:241] waiting for startup goroutines ...
	I1007 13:40:11.774852  800212 start.go:246] waiting for cluster config update ...
	I1007 13:40:11.774862  800212 start.go:255] writing updated cluster config ...
	I1007 13:40:11.775199  800212 ssh_runner.go:195] Run: rm -f paused
	I1007 13:40:11.829109  800212 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:40:11.831389  800212 out.go:177] * Done! kubectl is now configured to use "embed-certs-653322" cluster and "default" namespace by default
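At this point minikube has written the embed-certs-653322 context into the kubeconfig. A short verification sketch, using only the names from this log:

    kubectl config current-context                        # typically prints the profile name
    kubectl --context embed-certs-653322 get nodes -o wide
    kubectl --context embed-certs-653322 -n kube-system get pods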
	I1007 13:40:09.682305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:11.014595  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:13.514109  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:12.754391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:16.015105  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.513935  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.834414  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.906376  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.015129  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:23.518245  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:26.014981  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:28.513904  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:27.986365  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.058375  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.015269  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.514729  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:36.013424  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.014881  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.507584  800087 pod_ready.go:82] duration metric: took 4m0.000325195s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" ...
	E1007 13:40:38.507633  800087 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" (will not retry!)
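The 4m0s timeout above matches how these test profiles configure metrics-server: the "- Using image fake.domain/registry.k8s.io/echoserver:1.4" lines elsewhere in this log point the deployment at an unresolvable registry host, so the image pull cannot succeed and the pod never reports Ready. Hedged commands for confirming that kind of failure against the affected profile's context (the k8s-app=metrics-server label follows the upstream manifests and is an assumption here):

    kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20
    kubectl -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server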
	I1007 13:40:38.507657  800087 pod_ready.go:39] duration metric: took 4m14.542185527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:38.507694  800087 kubeadm.go:597] duration metric: took 4m21.215120888s to restartPrimaryControlPlane
	W1007 13:40:38.507784  800087 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:40:38.507824  800087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:37.138368  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:40.210391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:46.290312  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:49.362313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:55.442333  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:58.514279  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:04.757708  800087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.249856079s)
	I1007 13:41:04.757796  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:04.787393  800087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:41:04.805311  800087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:04.819815  800087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:04.819839  800087 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:04.819889  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:04.832607  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:04.832673  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:04.847624  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:04.859808  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:04.859890  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:04.886041  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.896677  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:04.896746  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.906688  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:04.915884  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:04.915965  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
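The four grep/rm pairs above all follow one pattern: any leftover /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted so that kubeadm init can regenerate it. A compact sketch of that loop, purely illustrative and using the same files and URL as the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: let kubeadm init rewrite it
      fi
    done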
	I1007 13:41:04.925767  800087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:04.981704  800087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:41:04.981799  800087 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:05.104530  800087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:05.104648  800087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:05.104750  800087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:41:05.114782  800087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:05.116948  800087 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:05.117074  800087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:05.117168  800087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:05.117275  800087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:05.117358  800087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:05.117447  800087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:05.117522  800087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:05.117620  800087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:05.117733  800087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:05.117851  800087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:05.117961  800087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:05.118055  800087 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:05.118147  800087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:05.216990  800087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:05.548814  800087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:41:05.921322  800087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:06.206950  800087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:06.412087  800087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:06.412698  800087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:06.415768  800087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:04.598286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:06.418055  800087 out.go:235]   - Booting up control plane ...
	I1007 13:41:06.418195  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:06.419324  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:06.420095  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:06.437974  800087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:06.447497  800087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:06.447580  800087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:06.582080  800087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:41:06.582223  800087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:41:07.583021  800087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001204833s
	I1007 13:41:07.583165  800087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
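When kubeadm keeps reporting that the kubelet "isn't running or healthy", as in the repeated checks above, the usual next step is to inspect the kubelet unit on that node (for a minikube guest, via minikube ssh -p <profile>); a hedged sketch:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 100
    curl -sSL http://localhost:10248/healthz    # the same probe kubeadm is retrying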
	I1007 13:41:07.666427  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:13.085728  800087 kubeadm.go:310] [api-check] The API server is healthy after 5.502732546s
	I1007 13:41:13.105047  800087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:41:13.122083  800087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:41:13.157464  800087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:41:13.157751  800087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-016701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:41:13.176062  800087 kubeadm.go:310] [bootstrap-token] Using token: ott6bx.mfcul37ilsfpftrr
	I1007 13:41:13.177574  800087 out.go:235]   - Configuring RBAC rules ...
	I1007 13:41:13.177739  800087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:41:13.184629  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:41:13.200989  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:41:13.206521  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:41:13.212338  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:41:13.217063  800087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:41:13.493012  800087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:41:13.926154  800087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:41:14.500818  800087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:41:14.500844  800087 kubeadm.go:310] 
	I1007 13:41:14.500894  800087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:41:14.500899  800087 kubeadm.go:310] 
	I1007 13:41:14.500988  800087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:41:14.501001  800087 kubeadm.go:310] 
	I1007 13:41:14.501041  800087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:41:14.501095  800087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:41:14.501196  800087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:41:14.501223  800087 kubeadm.go:310] 
	I1007 13:41:14.501307  800087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:41:14.501316  800087 kubeadm.go:310] 
	I1007 13:41:14.501379  800087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:41:14.501448  800087 kubeadm.go:310] 
	I1007 13:41:14.501533  800087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:41:14.501629  800087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:41:14.501733  800087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:41:14.501750  800087 kubeadm.go:310] 
	I1007 13:41:14.501854  800087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:41:14.501964  800087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:41:14.501973  800087 kubeadm.go:310] 
	I1007 13:41:14.502109  800087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502269  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:41:14.502311  800087 kubeadm.go:310] 	--control-plane 
	I1007 13:41:14.502322  800087 kubeadm.go:310] 
	I1007 13:41:14.502443  800087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:41:14.502453  800087 kubeadm.go:310] 
	I1007 13:41:14.502600  800087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502755  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:41:14.503943  800087 kubeadm.go:310] W1007 13:41:04.948448    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504331  800087 kubeadm.go:310] W1007 13:41:04.949311    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504448  800087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
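The join commands printed above embed a bootstrap token (ott6bx.mfcul37ilsfpftrr) and a CA certificate hash. Bootstrap tokens expire (24h by default), so if a node joins later a fresh command can be generated on the control-plane node; a minimal sketch:

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command   # prints a new worker join command with a valid token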
	I1007 13:41:14.504466  800087 cni.go:84] Creating CNI manager for ""
	I1007 13:41:14.504474  800087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:41:14.506680  800087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:41:14.508369  800087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:41:14.520414  800087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
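The 496-byte conflist copied above is minikube's bridge CNI configuration; the exact bytes are not in the log. As an assumption only (not the file minikube wrote; names and subnet are illustrative), a bridge conflist of this general shape looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }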
	I1007 13:41:14.544669  800087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:41:14.544833  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:14.544851  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016701 minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=no-preload-016701 minikube.k8s.io/primary=true
	I1007 13:41:14.772594  800087 ops.go:34] apiserver oom_adj: -16
	I1007 13:41:14.772619  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:13.746372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:16.822393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:15.273211  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:15.772786  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.273580  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.773395  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.272868  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.773484  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.273717  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.405010  800087 kubeadm.go:1113] duration metric: took 3.86025273s to wait for elevateKubeSystemPrivileges
	I1007 13:41:18.405055  800087 kubeadm.go:394] duration metric: took 5m1.164485599s to StartCluster
	I1007 13:41:18.405081  800087 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.405182  800087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:41:18.406935  800087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.407244  800087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:41:18.407398  800087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:41:18.407513  800087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016701"
	I1007 13:41:18.407539  800087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-016701"
	W1007 13:41:18.407549  800087 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:41:18.407548  800087 addons.go:69] Setting default-storageclass=true in profile "no-preload-016701"
	I1007 13:41:18.407572  800087 addons.go:69] Setting metrics-server=true in profile "no-preload-016701"
	I1007 13:41:18.407615  800087 addons.go:234] Setting addon metrics-server=true in "no-preload-016701"
	W1007 13:41:18.407721  800087 addons.go:243] addon metrics-server should already be in state true
	I1007 13:41:18.407850  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407591  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407545  800087 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:41:18.407594  800087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016701"
	I1007 13:41:18.408374  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408387  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408417  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408424  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408447  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408542  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.409406  800087 out.go:177] * Verifying Kubernetes components...
	I1007 13:41:18.411018  800087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:41:18.425614  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I1007 13:41:18.426275  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.426764  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I1007 13:41:18.426926  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.426956  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427308  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.427410  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.427840  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.427862  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427976  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.428024  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.428257  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.428470  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.428478  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I1007 13:41:18.428980  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.429578  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.429605  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.429927  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.430564  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.430602  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.431895  800087 addons.go:234] Setting addon default-storageclass=true in "no-preload-016701"
	W1007 13:41:18.431918  800087 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:41:18.431952  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.432279  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.432319  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.445003  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1007 13:41:18.445514  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.445968  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1007 13:41:18.446101  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.446125  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.446534  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.446580  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.446821  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.447159  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.447187  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.447559  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.447754  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.449595  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.450543  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.452177  800087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:41:18.452788  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I1007 13:41:18.453311  800087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:41:18.453332  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.454421  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.454443  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.454767  800087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.454791  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:41:18.454813  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.454902  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.455260  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:41:18.455277  800087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:41:18.455293  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.455514  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.455574  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.458904  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459133  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459321  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459529  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459681  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459699  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459704  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.459849  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.459962  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459994  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.460161  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.460349  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.460480  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.495484  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1007 13:41:18.496027  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.496790  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.496828  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.497324  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.497537  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.499178  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.499425  800087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.499440  800087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:41:18.499457  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.502808  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503337  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.503363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503573  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.503796  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.503972  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.504135  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.607501  800087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:41:18.631538  800087 node_ready.go:35] waiting up to 6m0s for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645041  800087 node_ready.go:49] node "no-preload-016701" has status "Ready":"True"
	I1007 13:41:18.645065  800087 node_ready.go:38] duration metric: took 13.492405ms for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645076  800087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:18.651831  800087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:18.689502  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.714363  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:41:18.714386  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:41:18.738095  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.794344  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:41:18.794384  800087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:41:18.906848  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:18.906886  800087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:41:18.991553  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:19.434333  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434360  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434687  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.434701  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434710  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434716  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434932  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434987  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435004  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.435015  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434993  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435269  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435274  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435282  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.435290  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.435297  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.436889  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.436909  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.456678  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.456714  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.457112  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.457133  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.457164  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.382548  800087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.390945966s)
	I1007 13:41:20.382614  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.382628  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.382952  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383052  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383068  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.383077  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.383010  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.383354  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383370  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383384  800087 addons.go:475] Verifying addon metrics-server=true in "no-preload-016701"
	I1007 13:41:20.385366  800087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:41:20.386603  800087 addons.go:510] duration metric: took 1.979211294s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
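	[editor's note] The addon enable path logged above stages each manifest under /etc/kubernetes/addons over SSH and then applies them all in one kubectl invocation. A minimal sketch of that apply step follows; the file paths mirror the log, the --kubeconfig flag stands in for the KUBECONFIG environment variable used there, and this is illustrative rather than minikube's actual addons.go.

	// Hedged sketch of the addon apply step logged above: manifests already
	// copied under /etc/kubernetes/addons are applied in a single kubectl call.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"--kubeconfig", "/var/lib/minikube/kubeconfig", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}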
	I1007 13:41:20.665725  800087 pod_ready.go:103] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"False"
	I1007 13:41:22.158063  800087 pod_ready.go:93] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:22.158090  800087 pod_ready.go:82] duration metric: took 3.506228901s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:22.158100  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165304  800087 pod_ready.go:93] pod "kube-apiserver-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.165330  800087 pod_ready.go:82] duration metric: took 2.007223213s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165340  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172907  800087 pod_ready.go:93] pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.172930  800087 pod_ready.go:82] duration metric: took 7.583143ms for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172939  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180216  800087 pod_ready.go:93] pod "kube-proxy-bjqg2" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.180243  800087 pod_ready.go:82] duration metric: took 7.297732ms for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180255  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185080  800087 pod_ready.go:93] pod "kube-scheduler-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.185108  800087 pod_ready.go:82] duration metric: took 4.84549ms for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185119  800087 pod_ready.go:39] duration metric: took 5.540032302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:24.185141  800087 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:41:24.185197  800087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:41:24.201360  800087 api_server.go:72] duration metric: took 5.794073168s to wait for apiserver process to appear ...
	I1007 13:41:24.201464  800087 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:41:24.201496  800087 api_server.go:253] Checking apiserver healthz at https://192.168.39.197:8443/healthz ...
	I1007 13:41:24.207141  800087 api_server.go:279] https://192.168.39.197:8443/healthz returned 200:
	ok
	I1007 13:41:24.208456  800087 api_server.go:141] control plane version: v1.31.1
	I1007 13:41:24.208481  800087 api_server.go:131] duration metric: took 7.007742ms to wait for apiserver health ...
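	[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver until it answers 200 with the body "ok". Below is a minimal sketch of such a probe, assuming the address from the log and skipping TLS verification for brevity (a real client should trust the cluster CA); it is not minikube's api_server.go.

	// Minimal apiserver healthz probe, similar in spirit to the check logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; skipping verification
				// is only acceptable for a throwaway local cluster.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.197:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}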
	I1007 13:41:24.208491  800087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:41:24.213660  800087 system_pods.go:59] 9 kube-system pods found
	I1007 13:41:24.213693  800087 system_pods.go:61] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213701  800087 system_pods.go:61] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213711  800087 system_pods.go:61] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.213716  800087 system_pods.go:61] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.213719  800087 system_pods.go:61] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.213722  800087 system_pods.go:61] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.213725  800087 system_pods.go:61] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.213730  800087 system_pods.go:61] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.213734  800087 system_pods.go:61] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.213742  800087 system_pods.go:74] duration metric: took 5.244677ms to wait for pod list to return data ...
	I1007 13:41:24.213749  800087 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:41:24.216891  800087 default_sa.go:45] found service account: "default"
	I1007 13:41:24.216923  800087 default_sa.go:55] duration metric: took 3.165762ms for default service account to be created ...
	I1007 13:41:24.216936  800087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:41:24.366926  800087 system_pods.go:86] 9 kube-system pods found
	I1007 13:41:24.366962  800087 system_pods.go:89] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366970  800087 system_pods.go:89] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366977  800087 system_pods.go:89] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.366982  800087 system_pods.go:89] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.366986  800087 system_pods.go:89] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.366990  800087 system_pods.go:89] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.366993  800087 system_pods.go:89] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.366998  800087 system_pods.go:89] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.367001  800087 system_pods.go:89] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.367011  800087 system_pods.go:126] duration metric: took 150.068129ms to wait for k8s-apps to be running ...
	I1007 13:41:24.367018  800087 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:41:24.367064  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:24.383197  800087 system_svc.go:56] duration metric: took 16.165166ms WaitForService to wait for kubelet
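	[editor's note] The kubelet service wait relies on `systemctl is-active --quiet`, which reports state purely through its exit code, so the caller only has to inspect the command error. A simplified sketch of that check (unit name shortened to kubelet, sudo omitted); illustrative only.

	// Check whether the kubelet systemd unit is active by exit status alone.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `is-active --quiet` prints nothing; a nil error means the unit is active.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}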
	I1007 13:41:24.383232  800087 kubeadm.go:582] duration metric: took 5.975954284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:41:24.383256  800087 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:41:24.563433  800087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:41:24.563469  800087 node_conditions.go:123] node cpu capacity is 2
	I1007 13:41:24.563486  800087 node_conditions.go:105] duration metric: took 180.224622ms to run NodePressure ...
	I1007 13:41:24.563503  800087 start.go:241] waiting for startup goroutines ...
	I1007 13:41:24.563514  800087 start.go:246] waiting for cluster config update ...
	I1007 13:41:24.563529  800087 start.go:255] writing updated cluster config ...
	I1007 13:41:24.563898  800087 ssh_runner.go:195] Run: rm -f paused
	I1007 13:41:24.619289  800087 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:41:24.621527  800087 out.go:177] * Done! kubectl is now configured to use "no-preload-016701" cluster and "default" namespace by default
	I1007 13:41:22.898326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:25.970388  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:32.050353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:35.122329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:41.202320  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:44.274335  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
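	[editor's note] The cleanup sequence above checks each kubeconfig for the expected control-plane endpoint and deletes any file that lacks it (or is missing entirely), so the retried kubeadm init regenerates them. A hedged sketch of that loop follows, with paths taken from the log; it is not minikube's kubeadm.go.

	// Remove stale kubeconfigs that do not reference the expected endpoint.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range configs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: remove so kubeadm writes a fresh one.
				if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Println("could not remove", path, rmErr)
				}
				continue
			}
			fmt.Println("keeping", path)
		}
	}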
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:41:50.354360  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:53.426359  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:59.510321  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:02.578322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:08.658292  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:11.730352  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:17.810322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:20.882397  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:26.962343  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:30.034345  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:36.114353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:39.186453  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.266293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:48.338329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:54.418332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:57.490294  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:03.570372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:06.642286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:09.643253  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:09.643290  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643598  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:09.643627  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:09.645347  802960 machine.go:96] duration metric: took 4m37.397836997s to provisionDockerMachine
	I1007 13:43:09.645389  802960 fix.go:56] duration metric: took 4m37.421085967s for fixHost
	I1007 13:43:09.645394  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 4m37.421104002s
	W1007 13:43:09.645409  802960 start.go:714] error starting host: provision: host is not running
	W1007 13:43:09.645530  802960 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 13:43:09.645542  802960 start.go:729] Will try again in 5 seconds ...
	I1007 13:43:14.646206  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:43:14.646330  802960 start.go:364] duration metric: took 74.211µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:43:14.646374  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:43:14.646382  802960 fix.go:54] fixHost starting: 
	I1007 13:43:14.646717  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:43:14.646746  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:43:14.662426  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I1007 13:43:14.663016  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:43:14.663790  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:43:14.663822  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:43:14.664176  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:43:14.664429  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:14.664605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:43:14.666440  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Stopped err=<nil>
	I1007 13:43:14.666467  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	W1007 13:43:14.666648  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:43:14.668507  802960 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-489319" ...
	I1007 13:43:14.669973  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Start
	I1007 13:43:14.670294  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring networks are active...
	I1007 13:43:14.671299  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network default is active
	I1007 13:43:14.671623  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network mk-default-k8s-diff-port-489319 is active
	I1007 13:43:14.672332  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Getting domain xml...
	I1007 13:43:14.673106  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Creating domain...
	I1007 13:43:15.035227  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting to get IP...
	I1007 13:43:15.036226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036673  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036768  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.036657  804186 retry.go:31] will retry after 204.852009ms: waiting for machine to come up
	I1007 13:43:15.243827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244610  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244699  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.244581  804186 retry.go:31] will retry after 334.887784ms: waiting for machine to come up
	I1007 13:43:15.581226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581717  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581747  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.581665  804186 retry.go:31] will retry after 354.992125ms: waiting for machine to come up
	I1007 13:43:15.938078  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938577  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938614  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.938518  804186 retry.go:31] will retry after 592.784389ms: waiting for machine to come up
	I1007 13:43:16.533531  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534103  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534128  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:16.534054  804186 retry.go:31] will retry after 756.034822ms: waiting for machine to come up
	I1007 13:43:17.291995  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292785  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292807  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:17.292736  804186 retry.go:31] will retry after 896.816081ms: waiting for machine to come up
	I1007 13:43:18.191016  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191527  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191560  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:18.191466  804186 retry.go:31] will retry after 1.08609499s: waiting for machine to come up
	I1007 13:43:19.280109  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280537  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280576  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:19.280520  804186 retry.go:31] will retry after 1.392221474s: waiting for machine to come up
	I1007 13:43:20.674622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675071  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675115  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:20.675031  804186 retry.go:31] will retry after 1.78021676s: waiting for machine to come up
	I1007 13:43:22.457647  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458248  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:22.458160  804186 retry.go:31] will retry after 2.117086662s: waiting for machine to come up
	I1007 13:43:24.576838  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577415  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577445  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:24.577364  804186 retry.go:31] will retry after 2.850833043s: waiting for machine to come up
	I1007 13:43:27.432222  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432855  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432882  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:27.432789  804186 retry.go:31] will retry after 3.63047619s: waiting for machine to come up
	I1007 13:43:31.065089  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.065729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Found IP for machine: 192.168.61.101
	I1007 13:43:31.065759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserving static IP address...
	I1007 13:43:31.065782  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has current primary IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.066317  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.066362  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserved static IP address: 192.168.61.101
	I1007 13:43:31.066395  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | skip adding static IP to network mk-default-k8s-diff-port-489319 - found existing host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"}
	I1007 13:43:31.066407  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for SSH to be available...
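	[editor's note] The run of "will retry after ..." lines while the VM boots comes from a growing, jittered backoff around the libvirt DHCP lease lookup. The sketch below only approximates that pattern; lookupIP is a hypothetical stand-in for the lease query and the growth factor roughly matches the delays seen in the log.

	// Jittered, growing backoff around an IP lookup that may not succeed yet.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical helper standing in for a libvirt DHCP lease query.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.101", nil
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 13; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Add up to 50% jitter on top of the base delay, then grow the base.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("gave up waiting for machine to come up")
	}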
	I1007 13:43:31.066449  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Getting to WaitForSSH function...
	I1007 13:43:31.068871  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069233  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.069265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH client type: external
	I1007 13:43:31.069398  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa (-rw-------)
	I1007 13:43:31.069451  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:43:31.069466  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | About to run SSH command:
	I1007 13:43:31.069475  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | exit 0
	I1007 13:43:31.194580  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | SSH cmd err, output: <nil>: 
	I1007 13:43:31.195021  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetConfigRaw
	I1007 13:43:31.195801  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.198966  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199324  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.199359  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199635  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:43:31.199893  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:43:31.199919  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:31.200168  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.202444  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202817  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.202849  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202989  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.203185  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203352  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.203683  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.203930  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.203943  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:43:31.307182  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:43:31.307224  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307497  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:31.307525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307722  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.310462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.310835  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.310905  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.311014  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.311192  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311437  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311613  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.311794  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.311969  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.311981  802960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489319 && echo "default-k8s-diff-port-489319" | sudo tee /etc/hostname
	I1007 13:43:31.436251  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489319
	
	I1007 13:43:31.436288  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.439927  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440241  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.440276  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440616  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.440887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441042  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441197  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.441360  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.441584  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.441612  802960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489319/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:43:31.552909  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:31.552947  802960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:43:31.552983  802960 buildroot.go:174] setting up certificates
	I1007 13:43:31.553002  802960 provision.go:84] configureAuth start
	I1007 13:43:31.553012  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.553454  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.556642  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557015  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.557055  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.559909  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560460  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.560487  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560719  802960 provision.go:143] copyHostCerts
	I1007 13:43:31.560792  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:43:31.560812  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:43:31.560889  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:43:31.561045  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:43:31.561058  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:43:31.561084  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:43:31.561171  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:43:31.561180  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:43:31.561208  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:43:31.561271  802960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489319 san=[127.0.0.1 192.168.61.101 default-k8s-diff-port-489319 localhost minikube]
	I1007 13:43:31.871377  802960 provision.go:177] copyRemoteCerts
	I1007 13:43:31.871459  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:43:31.871489  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.874464  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.874887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.874925  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.875112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.875368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.875547  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.875675  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:31.957423  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:43:31.988554  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1007 13:43:32.018470  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:43:32.046799  802960 provision.go:87] duration metric: took 493.782862ms to configureAuth
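	For reference, the server certificate generated at provision.go:117 above carries the SAN list shown in the log (127.0.0.1, 192.168.61.101, the profile name, localhost, minikube). The Go sketch below builds a comparable certificate with crypto/x509; it is self-signed only to keep the example short (minikube signs against its CA key instead), and the 26280h lifetime is taken from the CertExpiration field visible later in the profile config.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in for the CA-signed server cert; sketch only.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-489319"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-489319", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.101")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}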
	I1007 13:43:32.046830  802960 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:43:32.047021  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:43:32.047151  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.050313  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.050727  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.050760  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.051011  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.051216  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051385  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051522  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.051685  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.051878  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.051893  802960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:43:32.291927  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:43:32.291957  802960 machine.go:96] duration metric: took 1.092049658s to provisionDockerMachine
	I1007 13:43:32.291970  802960 start.go:293] postStartSetup for "default-k8s-diff-port-489319" (driver="kvm2")
	I1007 13:43:32.291985  802960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:43:32.292025  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.292491  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:43:32.292523  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.296195  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296625  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.296660  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296889  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.297104  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.297300  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.297479  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.377749  802960 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:43:32.382419  802960 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:43:32.382459  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:43:32.382557  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:43:32.382663  802960 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:43:32.382767  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:43:32.394059  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:32.422256  802960 start.go:296] duration metric: took 130.264438ms for postStartSetup
	I1007 13:43:32.422310  802960 fix.go:56] duration metric: took 17.775926417s for fixHost
	I1007 13:43:32.422340  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.425739  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.426254  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.426678  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426941  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.427080  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.427294  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.427305  802960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:43:32.531411  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308612.494637714
	
	I1007 13:43:32.531442  802960 fix.go:216] guest clock: 1728308612.494637714
	I1007 13:43:32.531450  802960 fix.go:229] Guest: 2024-10-07 13:43:32.494637714 +0000 UTC Remote: 2024-10-07 13:43:32.422315329 +0000 UTC m=+300.358475670 (delta=72.322385ms)
	I1007 13:43:32.531474  802960 fix.go:200] guest clock delta is within tolerance: 72.322385ms
	I1007 13:43:32.531480  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 17.885135029s
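	The guest-vs-host clock check above (fix.go:216-229) compares the guest's date +%s.%N output against the host timestamp and accepts a small delta. A minimal Go sketch of that comparison follows; the 2s tolerance is an assumption, since the log only shows that a ~72ms delta was considered within tolerance.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is ahead of (positive) or behind (negative) the host time.
	func guestClockDelta(guestOut string, hostTime time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostTime), nil
	}

	func main() {
		// Values taken from the log lines above.
		host := time.Date(2024, 10, 7, 13, 43, 32, 422315329, time.UTC)
		delta, err := guestClockDelta("1728308612.494637714", host)
		if err != nil {
			panic(err)
		}
		tolerance := 2 * time.Second // assumed bound, not minikube's actual value
		fmt.Println(delta, delta > -tolerance && delta < tolerance)
	}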
	I1007 13:43:32.531503  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.531787  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:32.534783  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.535265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535472  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536178  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536404  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536518  802960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:43:32.536581  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.536697  802960 ssh_runner.go:195] Run: cat /version.json
	I1007 13:43:32.536729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.539709  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.539743  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540166  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540202  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540348  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540417  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540598  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540638  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540762  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.540777  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540884  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.540947  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.541089  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.642238  802960 ssh_runner.go:195] Run: systemctl --version
	I1007 13:43:32.649391  802960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:43:32.799266  802960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:43:32.805598  802960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:43:32.805707  802960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:43:32.823518  802960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:43:32.823560  802960 start.go:495] detecting cgroup driver to use...
	I1007 13:43:32.823651  802960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:43:32.842054  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:43:32.858474  802960 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:43:32.858550  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:43:32.873750  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:43:32.889165  802960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:43:33.019729  802960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:43:33.182269  802960 docker.go:233] disabling docker service ...
	I1007 13:43:33.182371  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:43:33.198610  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:43:33.213911  802960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:43:33.343594  802960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:43:33.476026  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:43:33.493130  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:43:33.513584  802960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:43:33.513652  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.525714  802960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:43:33.525816  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.538658  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.551146  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.564914  802960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:43:33.578180  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.590140  802960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.610967  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.624890  802960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:43:33.636736  802960 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:43:33.636825  802960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:43:33.652573  802960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:43:33.665083  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:33.800780  802960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:43:33.898225  802960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:43:33.898309  802960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:43:33.903209  802960 start.go:563] Will wait 60s for crictl version
	I1007 13:43:33.903269  802960 ssh_runner.go:195] Run: which crictl
	I1007 13:43:33.907326  802960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:43:33.959008  802960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
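	After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before querying the runtime version. A simple way to express that wait in Go is sketched below; only the 60s budget comes from the log, and the 500ms poll interval is an assumption.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout expires.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}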
	I1007 13:43:33.959168  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:33.990929  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:34.023756  802960 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:43:34.025496  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:34.028784  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029327  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:34.029360  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029672  802960 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:43:34.034690  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:34.048101  802960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:43:34.048259  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:43:34.048325  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:34.086926  802960 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:43:34.087050  802960 ssh_runner.go:195] Run: which lz4
	I1007 13:43:34.091973  802960 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:43:34.096623  802960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:43:34.096671  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:43:35.604800  802960 crio.go:462] duration metric: took 1.512877493s to copy over tarball
	I1007 13:43:35.604892  802960 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:43:37.805292  802960 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200363211s)
	I1007 13:43:37.805327  802960 crio.go:469] duration metric: took 2.200488229s to extract the tarball
	I1007 13:43:37.805338  802960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:43:37.845477  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:37.895532  802960 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:43:37.895562  802960 cache_images.go:84] Images are preloaded, skipping loading
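	The preload step above copies preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 to the node and unpacks it with tar -I lz4 before re-checking crictl images. A Go sketch that shells out to the same tar invocation is shown below; it assumes it runs on the node with sudo and lz4 available, and the paths are the ones from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks the preload tarball into dest, mirroring the tar
	// command recorded in the log above.
	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}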
	I1007 13:43:37.895574  802960 kubeadm.go:934] updating node { 192.168.61.101 8444 v1.31.1 crio true true} ...
	I1007 13:43:37.895725  802960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-489319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:43:37.895804  802960 ssh_runner.go:195] Run: crio config
	I1007 13:43:37.949367  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:37.949395  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:37.949410  802960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:43:37.949433  802960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.101 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489319 NodeName:default-k8s-diff-port-489319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:43:37.949576  802960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.101
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
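	The kubeadm, kubelet and kube-proxy configuration dumped above is generated from the node parameters visible in the log (IP 192.168.61.101, port 8444, CRI socket, profile name). As a rough illustration, the text/template sketch below renders just the InitConfiguration fragment from those parameters; the template and struct here are illustrative, not minikube's actual bootstrapper code.

	package main

	import (
		"os"
		"text/template"
	)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		params := struct {
			NodeIP, NodeName, CRISocket string
			APIServerPort               int
		}{"192.168.61.101", "default-k8s-diff-port-489319", "/var/run/crio/crio.sock", 8444}
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}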
	I1007 13:43:37.949659  802960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:43:37.959941  802960 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:43:37.960076  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:43:37.970766  802960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1007 13:43:37.989311  802960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:43:38.009634  802960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1007 13:43:38.027642  802960 ssh_runner.go:195] Run: grep 192.168.61.101	control-plane.minikube.internal$ /etc/hosts
	I1007 13:43:38.031764  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:38.044131  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:38.185253  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:43:38.212538  802960 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319 for IP: 192.168.61.101
	I1007 13:43:38.212565  802960 certs.go:194] generating shared ca certs ...
	I1007 13:43:38.212589  802960 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:43:38.212799  802960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:43:38.212859  802960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:43:38.212873  802960 certs.go:256] generating profile certs ...
	I1007 13:43:38.212997  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/client.key
	I1007 13:43:38.213082  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key.f1e25377
	I1007 13:43:38.213153  802960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key
	I1007 13:43:38.213325  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:43:38.213365  802960 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:43:38.213390  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:43:38.213425  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:43:38.213471  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:43:38.213501  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:43:38.213559  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:38.214588  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:43:38.266516  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:43:38.305985  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:43:38.353490  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:43:38.380638  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 13:43:38.424440  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:43:38.452428  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:43:38.480709  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:43:38.509639  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:43:38.536940  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:43:38.564021  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:43:38.591067  802960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:43:38.609218  802960 ssh_runner.go:195] Run: openssl version
	I1007 13:43:38.616235  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:43:38.629007  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634324  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634400  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.641330  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:43:38.654384  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:43:38.667134  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672330  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672407  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.678719  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:43:38.690565  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:43:38.705158  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710787  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710868  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.717093  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:43:38.729957  802960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:43:38.735559  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:43:38.742580  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:43:38.749684  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:43:38.756534  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:43:38.762897  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:43:38.770450  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
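	The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. The same check in Go, using crypto/x509, could look like the sketch below; the path and the 24h window are taken from the log, the rest is illustrative.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is the question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}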
	I1007 13:43:38.777701  802960 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:43:38.777813  802960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:43:38.777880  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.822678  802960 cri.go:89] found id: ""
	I1007 13:43:38.822746  802960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:43:38.833436  802960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:43:38.833463  802960 kubeadm.go:593] restartPrimaryControlPlane start ...
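	The restart decision at kubeadm.go:408 above follows a single sudo ls over /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd. A rough local equivalent is sketched below; it assumes all three paths must exist for a restart to be attempted, since the exact rule is not visible in the log.

	package main

	import (
		"fmt"
		"os"
	)

	// hasExistingConfig reports whether every given path exists on this machine.
	func hasExistingConfig(paths ...string) bool {
		for _, p := range paths {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println("attempt restart:", hasExistingConfig(
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		))
	}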
	I1007 13:43:38.833516  802960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:43:38.844226  802960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:43:38.845383  802960 kubeconfig.go:125] found "default-k8s-diff-port-489319" server: "https://192.168.61.101:8444"
	I1007 13:43:38.848063  802960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:43:38.859087  802960 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.101
	I1007 13:43:38.859129  802960 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:43:38.859142  802960 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:43:38.859221  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.902955  802960 cri.go:89] found id: ""
	I1007 13:43:38.903054  802960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:43:38.920556  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:43:38.930998  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:43:38.931027  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:43:38.931095  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:43:38.940538  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:43:38.940608  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:43:38.951198  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:43:38.960653  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:43:38.960746  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:43:38.970800  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.981094  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:43:38.981176  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.991845  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:43:39.001966  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:43:39.002080  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
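
For reference, the stale-config check-and-cleanup logged above boils down to the following loop (a sketch reconstructed from the commands in this log; the endpoint https://control-plane.minikube.internal:8444 is this cluster's APIServerPort):

    # Drop any kubeadm-managed kubeconfig that does not point at this cluster's
    # control-plane endpoint; grep also exits non-zero when the file is missing,
    # which is what happens here (none of the four files exist yet).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done
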
	I1007 13:43:39.014014  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:43:39.026304  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:39.157169  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.098491  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.941274215s)
	I1007 13:43:41.098539  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.310925  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.402330  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.502763  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:43:41.502864  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.003197  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
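
The restart path then rebuilds the control plane with individual kubeadm phases rather than a full kubeadm init. Condensed, the sequence above is roughly the following (the binary path and config file are the ones shown in the log):

    KUBEADM=/var/lib/minikube/binaries/v1.31.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo cp "${CFG}.new" "$CFG"
    sudo "$KUBEADM" init phase certs all         --config "$CFG"  # reuse or refresh certificates
    sudo "$KUBEADM" init phase kubeconfig all    --config "$CFG"  # rewrite the *.conf files removed above
    sudo "$KUBEADM" init phase kubelet-start     --config "$CFG"  # write kubelet config/flags and start it
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"  # static pod manifests for apiserver, controller-manager, scheduler
    sudo "$KUBEADM" init phase etcd local        --config "$CFG"  # static pod manifest for local etcd

Once the phases complete, minikube polls pgrep for the kube-apiserver process (the two Run lines just above) before moving on to the healthz checks.
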
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 
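
The failing run interleaved here (pid 800812, the v1.20.0 cluster) exits with K8S_KUBELET_NOT_RUNNING. Following the suggestions it prints, triage on the guest plus a retry with the recommended kubelet flag would look roughly like this (a sketch; <profile> is a placeholder for that cluster's profile name, which does not appear in this excerpt):

    # On the guest: is the kubelet running at all, and why did it stop?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # Any control-plane containers started by CRI-O?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # From the host: retry with the cgroup driver the log suggests.
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
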
	I1007 13:43:42.503307  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.523040  802960 api_server.go:72] duration metric: took 1.020293575s to wait for apiserver process to appear ...
	I1007 13:43:42.523069  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:43:42.523093  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:42.523750  802960 api_server.go:269] stopped: https://192.168.61.101:8444/healthz: Get "https://192.168.61.101:8444/healthz": dial tcp 192.168.61.101:8444: connect: connection refused
	I1007 13:43:43.023271  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.500619  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.500651  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.500665  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.544628  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.544688  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.544701  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.643845  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:45.643890  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.023194  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.029635  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.029672  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.523339  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.528709  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.528745  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.023901  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.032151  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:47.032192  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.523593  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.531558  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:43:47.542161  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:43:47.542203  802960 api_server.go:131] duration metric: took 5.019126566s to wait for apiserver health ...
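
The early 403s in this poll are consistent with the apiserver still bootstrapping RBAC: the probe hits /healthz anonymously, and the role bindings that allow that only exist once the rbac/bootstrap-roles post-start hook (still marked failed in the 500 bodies above) has finished. Once the kubeconfigs are regenerated, the same verbose health output can be fetched by hand on the guest, for example (a sketch; paths follow the binaries directory and kubeadm layout shown elsewhere in this log):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/readyz?verbose'
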
	I1007 13:43:47.542216  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:47.542227  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:47.544352  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:43:47.546075  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:43:47.560213  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
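
The 496-byte file pushed here is minikube's bridge CNI configuration for the crio runtime. Its exact contents are not captured in this log; a bridge conflist of the same general shape looks like the following (illustrative only; the name, subnet, and plugin list are assumptions, not the file minikube actually wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
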
	I1007 13:43:47.612380  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:43:47.633953  802960 system_pods.go:59] 8 kube-system pods found
	I1007 13:43:47.634015  802960 system_pods.go:61] "coredns-7c65d6cfc9-4nl8s" [798ab07d-53ab-45f3-9517-a3ea78152fc7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:43:47.634042  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [a3fd82bc-a9b5-4955-b3f8-d88c5bb5951d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:43:47.634058  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [431b750f-f9ca-4e27-a7db-6c758047acf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:43:47.634069  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [0289a6a2-f3b7-43fa-a97c-4464b93c2ecc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:43:47.634081  802960 system_pods.go:61] "kube-proxy-9s9p4" [8aeaf16d-764e-4da5-b27d-1915e33b3f2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 13:43:47.634102  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [4e5878d2-8ceb-4707-b2fd-834fd5f485be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:43:47.634114  802960 system_pods.go:61] "metrics-server-6867b74b74-s8v5f" [c498a0f1-ffb8-482d-b6be-ce04d3d6ff85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:43:47.634120  802960 system_pods.go:61] "storage-provisioner" [c7754b45-21b7-4a4e-b21a-11c5e9eae07d] Running
	I1007 13:43:47.634133  802960 system_pods.go:74] duration metric: took 21.726405ms to wait for pod list to return data ...
	I1007 13:43:47.634143  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:43:47.646482  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:43:47.646520  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:43:47.646534  802960 node_conditions.go:105] duration metric: took 12.386071ms to run NodePressure ...
	I1007 13:43:47.646556  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:48.002169  802960 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007151  802960 kubeadm.go:739] kubelet initialised
	I1007 13:43:48.007183  802960 kubeadm.go:740] duration metric: took 4.972433ms waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007211  802960 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:43:48.013961  802960 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:50.020725  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:52.020875  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:53.521602  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.521625  802960 pod_ready.go:82] duration metric: took 5.507628288s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.521637  802960 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529062  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.529090  802960 pod_ready.go:82] duration metric: took 7.446479ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529101  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:55.536129  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:58.036214  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:00.535183  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:02.035543  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.035567  802960 pod_ready.go:82] duration metric: took 8.506460378s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.035578  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040799  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.040823  802960 pod_ready.go:82] duration metric: took 5.237515ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040833  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045410  802960 pod_ready.go:93] pod "kube-proxy-9s9p4" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.045434  802960 pod_ready.go:82] duration metric: took 4.593822ms for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045444  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049665  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.049691  802960 pod_ready.go:82] duration metric: took 4.239058ms for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049701  802960 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:04.056407  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:06.062186  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:08.555372  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:10.556334  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:12.556423  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:14.557939  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:17.055829  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:19.056756  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:21.057049  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:23.058462  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:25.556545  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:27.556661  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:30.057123  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:32.057581  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:34.556797  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:37.055971  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:39.057054  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:41.057194  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:43.555532  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:45.556365  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:47.556508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:50.056070  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:52.056349  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:54.057809  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:56.556012  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:58.556338  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:00.558599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:03.058077  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:05.558375  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:07.558780  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:10.055494  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:12.057085  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:14.557752  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:17.056626  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:19.556724  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:22.057696  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:24.556552  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:27.056861  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:29.057505  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:31.555965  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:33.557729  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:35.557839  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:38.056814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:40.057838  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:42.058324  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:44.557202  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:47.056736  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:49.057871  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:51.556705  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:53.557023  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:55.557080  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:57.557599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:00.057399  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:02.057880  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:04.556689  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:06.557381  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:09.057237  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:11.057328  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:13.556210  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:15.556303  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:17.556994  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:19.557835  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:22.056480  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:24.556325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:26.556600  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:28.556639  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:30.556983  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:33.056142  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:35.057034  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:37.057246  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:39.556678  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:42.056900  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:44.057207  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:46.057325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:48.556417  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:51.056726  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:53.556598  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:55.557245  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:58.058116  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:00.059008  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:02.557074  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:05.056911  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:07.057374  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:09.556185  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:11.556584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:14.056433  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:16.056567  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:18.557584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:21.056484  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:23.056610  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.058105  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:27.555814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.556605  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:31.557226  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:34.057006  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.556126  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:38.556720  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:40.557339  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.055498  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:45.056400  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:47.056671  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:49.556490  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:52.056617  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:54.556079  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.556885  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:59.056725  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:01.560508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.050835  802960 pod_ready.go:82] duration metric: took 4m0.001111748s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	E1007 13:48:02.050883  802960 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:48:02.050910  802960 pod_ready.go:39] duration metric: took 4m14.0436862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
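
The 4m0s wait that times out above is the standard pod-readiness poll: the pod is re-fetched on a short interval and its Ready condition is inspected until it reports True or the deadline expires (metrics-server never becomes Ready here, so the wait gives up and the control plane is reset below). A minimal client-go sketch of that pattern, not minikube's actual pod_ready.go code; the helper name waitPodReady, the 2s interval, and the kubeconfig path are illustrative assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod until its Ready condition is True
    // or the timeout elapses, mirroring the wait logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-s8v5f", 4*time.Minute)
    	fmt.Println("ready:", err == nil)
    }
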
	I1007 13:48:02.050947  802960 kubeadm.go:597] duration metric: took 4m23.217477497s to restartPrimaryControlPlane
	W1007 13:48:02.051112  802960 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:48:02.051179  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:48:28.304486  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.253272533s)
	I1007 13:48:28.304707  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:28.320794  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:48:28.332332  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:48:28.343070  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:48:28.343095  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:48:28.343157  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:48:28.354012  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:48:28.354118  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:48:28.364581  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:48:28.375492  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:48:28.375560  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:48:28.386761  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.396663  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:48:28.396728  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.407316  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:48:28.417872  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:48:28.417938  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
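
The four grep/rm pairs above are the stale-config check that runs after `kubeadm reset`: for each kubeconfig under /etc/kubernetes, the expected control-plane URL is grepped for and the file is removed if the URL is absent. Here every grep exits with status 2 because the reset already deleted the files, so each rm is a no-op. A hedged sketch of the same loop using os/exec; the helper name cleanStaleConfigs is invented for illustration and this is not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleConfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, approximating the cleanup step above.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// A non-zero exit means the URL is not present or the file is
    		// missing (grep returns status 2 in the log above).
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8444")
    }
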
	I1007 13:48:28.428569  802960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:48:28.476704  802960 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:48:28.476823  802960 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:48:28.590009  802960 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:48:28.590162  802960 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:48:28.590300  802960 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:48:28.600046  802960 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:48:28.602443  802960 out.go:235]   - Generating certificates and keys ...
	I1007 13:48:28.602559  802960 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:48:28.602623  802960 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:48:28.602711  802960 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:48:28.602790  802960 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:48:28.602884  802960 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:48:28.602931  802960 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:48:28.603008  802960 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:48:28.603118  802960 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:48:28.603256  802960 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:48:28.603372  802960 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:48:28.603429  802960 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:48:28.603498  802960 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:48:28.710739  802960 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:48:28.967010  802960 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:48:29.107742  802960 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:48:29.239779  802960 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:48:29.344572  802960 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:48:29.345301  802960 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:48:29.348025  802960 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:48:29.350415  802960 out.go:235]   - Booting up control plane ...
	I1007 13:48:29.350549  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:48:29.350650  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:48:29.350732  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:48:29.369742  802960 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:48:29.379251  802960 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:48:29.379337  802960 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:48:29.527857  802960 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:48:29.528013  802960 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:48:30.528609  802960 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001343456s
	I1007 13:48:30.528741  802960 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:48:35.532432  802960 kubeadm.go:310] [api-check] The API server is healthy after 5.003996251s
	I1007 13:48:35.548242  802960 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:48:35.569290  802960 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:48:35.607149  802960 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:48:35.607386  802960 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-489319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:48:35.623965  802960 kubeadm.go:310] [bootstrap-token] Using token: 5jqtrt.7avot15frjqa3f3n
	I1007 13:48:35.626327  802960 out.go:235]   - Configuring RBAC rules ...
	I1007 13:48:35.626469  802960 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:48:35.632447  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:48:35.644119  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:48:35.653482  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:48:35.659903  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:48:35.666151  802960 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:48:35.941468  802960 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:48:36.395332  802960 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:48:36.941654  802960 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:48:36.942749  802960 kubeadm.go:310] 
	I1007 13:48:36.942851  802960 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:48:36.942863  802960 kubeadm.go:310] 
	I1007 13:48:36.942955  802960 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:48:36.942966  802960 kubeadm.go:310] 
	I1007 13:48:36.942997  802960 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:48:36.943073  802960 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:48:36.943160  802960 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:48:36.943180  802960 kubeadm.go:310] 
	I1007 13:48:36.943247  802960 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:48:36.943254  802960 kubeadm.go:310] 
	I1007 13:48:36.943300  802960 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:48:36.943310  802960 kubeadm.go:310] 
	I1007 13:48:36.943379  802960 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:48:36.943477  802960 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:48:36.943559  802960 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:48:36.943567  802960 kubeadm.go:310] 
	I1007 13:48:36.943639  802960 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:48:36.943758  802960 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:48:36.943781  802960 kubeadm.go:310] 
	I1007 13:48:36.944023  802960 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944184  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:48:36.944212  802960 kubeadm.go:310] 	--control-plane 
	I1007 13:48:36.944225  802960 kubeadm.go:310] 
	I1007 13:48:36.944328  802960 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:48:36.944341  802960 kubeadm.go:310] 
	I1007 13:48:36.944441  802960 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944564  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:48:36.946569  802960 kubeadm.go:310] W1007 13:48:28.442953    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.946947  802960 kubeadm.go:310] W1007 13:48:28.444068    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.947056  802960 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:48:36.947089  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:48:36.947100  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:48:36.949279  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:48:36.951020  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:48:36.966261  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
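
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration minikube installs when it recommends the "bridge" CNI for the kvm2/crio combination. A representative bridge conflist written the same way by a small Go program; the exact field values are assumptions for illustration and were not read from this run:

    package main

    import "os"

    // A minimal bridge CNI config of the kind written above (values illustrative).
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
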
	I1007 13:48:36.991447  802960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:48:36.991537  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:36.991576  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-489319 minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=default-k8s-diff-port-489319 minikube.k8s.io/primary=true
	I1007 13:48:37.245837  802960 ops.go:34] apiserver oom_adj: -16
	I1007 13:48:37.253690  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:37.754572  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.254294  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.754766  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.253915  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.754118  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.254526  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.753887  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.254082  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.441338  802960 kubeadm.go:1113] duration metric: took 4.449876263s to wait for elevateKubeSystemPrivileges
	I1007 13:48:41.441397  802960 kubeadm.go:394] duration metric: took 5m2.66370907s to StartCluster
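
The repeated `kubectl get sa default` calls above (roughly every 500ms) are the elevateKubeSystemPrivileges wait: the freshly initialized cluster is polled until the default service account exists, at which point the minikube-rbac cluster-admin binding created earlier can take effect. A hedged sketch of that retry loop; waitDefaultSA and the one-minute timeout are illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitDefaultSA retries `kubectl get sa default` until the default service
    // account exists, at the half-second cadence visible in the log above.
    func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	err := waitDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig", time.Minute)
    	fmt.Println(err)
    }
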
	I1007 13:48:41.441446  802960 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.441564  802960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:48:41.443987  802960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.444365  802960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:48:41.444449  802960 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:48:41.444606  802960 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444633  802960 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444647  802960 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:48:41.444644  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:48:41.444669  802960 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444689  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444696  802960 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444748  802960 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444763  802960 addons.go:243] addon metrics-server should already be in state true
	I1007 13:48:41.444799  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444711  802960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-489319"
	I1007 13:48:41.445223  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445236  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445242  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445285  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445305  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445290  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.446533  802960 out.go:177] * Verifying Kubernetes components...
	I1007 13:48:41.448204  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:48:41.463351  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1007 13:48:41.463547  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I1007 13:48:41.464007  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464024  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464636  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464651  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464667  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.464674  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.465115  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465118  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465331  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.465770  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.465817  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.466630  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I1007 13:48:41.467414  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.468267  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.468293  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.468696  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.469177  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.469225  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.469939  802960 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.469967  802960 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:48:41.470004  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.470429  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.470491  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.485835  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I1007 13:48:41.485934  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I1007 13:48:41.486390  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1007 13:48:41.486401  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486694  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486850  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.487029  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487048  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487286  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487314  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487375  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.487668  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487692  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487915  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.487940  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488170  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488207  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.488812  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.488866  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.490870  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.491026  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.493370  802960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:48:41.493369  802960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:48:41.495269  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:48:41.495304  802960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:48:41.495335  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.495482  802960 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.495504  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:48:41.495525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.499997  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500173  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500600  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500819  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.501010  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501125  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501279  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501286  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501657  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.501683  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.509460  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1007 13:48:41.510229  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.510898  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.510934  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.511328  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.511540  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.513219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.513712  802960 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.513734  802960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:48:41.513759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.517041  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517439  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.517462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517630  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.517885  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.518121  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.518301  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.674144  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:48:41.742749  802960 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753582  802960 node_ready.go:49] node "default-k8s-diff-port-489319" has status "Ready":"True"
	I1007 13:48:41.753616  802960 node_ready.go:38] duration metric: took 10.764539ms for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753630  802960 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:41.769510  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:41.796357  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.844420  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.871099  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:48:41.871126  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:48:41.978289  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:48:41.978325  802960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:48:42.063366  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.063399  802960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:48:42.204106  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.261831  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.261861  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.262168  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.262192  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.262202  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.262209  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.263023  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.263040  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.285756  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.285786  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.286112  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.286135  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.286145  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044454  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.199980665s)
	I1007 13:48:43.044515  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.044892  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.044910  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.044926  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044934  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044942  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.045192  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.045208  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.045193  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303372  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.099210402s)
	I1007 13:48:43.303432  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303452  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.303783  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.303801  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.303799  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303811  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303821  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.304077  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.304094  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.304107  802960 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-489319"
	I1007 13:48:43.306084  802960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1007 13:48:43.307478  802960 addons.go:510] duration metric: took 1.863046306s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1007 13:48:43.778309  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:45.778814  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:47.775390  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:47.775417  802960 pod_ready.go:82] duration metric: took 6.005863403s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:47.775431  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789544  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.789573  802960 pod_ready.go:82] duration metric: took 1.01413369s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789587  802960 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796239  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.796267  802960 pod_ready.go:82] duration metric: took 6.671875ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796280  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.806996  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.807030  802960 pod_ready.go:82] duration metric: took 10.740949ms for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.807046  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814301  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.814335  802960 pod_ready.go:82] duration metric: took 7.279716ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814350  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976171  802960 pod_ready.go:93] pod "kube-proxy-jpvx5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.976198  802960 pod_ready.go:82] duration metric: took 161.84042ms for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976209  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175024  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:50.175051  802960 pod_ready.go:82] duration metric: took 1.198834555s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175062  802960 pod_ready.go:39] duration metric: took 8.42141844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:50.175094  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:48:50.175154  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:48:50.190906  802960 api_server.go:72] duration metric: took 8.746497817s to wait for apiserver process to appear ...
	I1007 13:48:50.190937  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:48:50.190969  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:48:50.196727  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:48:50.197751  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:48:50.197774  802960 api_server.go:131] duration metric: took 6.829939ms to wait for apiserver health ...
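
The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with body "ok". A minimal sketch of such a probe; it skips TLS verification for brevity, which is an assumption here since the real client trusts the cluster CA from its kubeconfig:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz issues the same kind of probe as logged above:
    // GET https://<ip>:<port>/healthz and report the status plus body.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification is a simplification for this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    	return nil
    }

    func main() {
    	_ = checkHealthz("https://192.168.61.101:8444/healthz")
    }
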
	I1007 13:48:50.197783  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:48:50.378985  802960 system_pods.go:59] 9 kube-system pods found
	I1007 13:48:50.379015  802960 system_pods.go:61] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.379023  802960 system_pods.go:61] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.379029  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.379034  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.379041  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.379045  802960 system_pods.go:61] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.379050  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.379059  802960 system_pods.go:61] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.379066  802960 system_pods.go:61] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.379078  802960 system_pods.go:74] duration metric: took 181.288145ms to wait for pod list to return data ...
	I1007 13:48:50.379091  802960 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:48:50.574098  802960 default_sa.go:45] found service account: "default"
	I1007 13:48:50.574127  802960 default_sa.go:55] duration metric: took 195.025343ms for default service account to be created ...
	I1007 13:48:50.574137  802960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:48:50.777201  802960 system_pods.go:86] 9 kube-system pods found
	I1007 13:48:50.777233  802960 system_pods.go:89] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.777238  802960 system_pods.go:89] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.777243  802960 system_pods.go:89] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.777247  802960 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.777252  802960 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.777257  802960 system_pods.go:89] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.777260  802960 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.777269  802960 system_pods.go:89] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.777273  802960 system_pods.go:89] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.777283  802960 system_pods.go:126] duration metric: took 203.138905ms to wait for k8s-apps to be running ...
	I1007 13:48:50.777292  802960 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:48:50.777338  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:50.794312  802960 system_svc.go:56] duration metric: took 17.00771ms WaitForService to wait for kubelet
	I1007 13:48:50.794350  802960 kubeadm.go:582] duration metric: took 9.349947078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:48:50.794376  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:48:50.974457  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:48:50.974484  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:48:50.974507  802960 node_conditions.go:105] duration metric: took 180.125373ms to run NodePressure ...
	I1007 13:48:50.974520  802960 start.go:241] waiting for startup goroutines ...
	I1007 13:48:50.974526  802960 start.go:246] waiting for cluster config update ...
	I1007 13:48:50.974537  802960 start.go:255] writing updated cluster config ...
	I1007 13:48:50.974827  802960 ssh_runner.go:195] Run: rm -f paused
	I1007 13:48:51.030094  802960 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:48:51.032736  802960 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-489319" cluster and "default" namespace by default
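The startup log above ends with the standard verification pass minikube performs before declaring the profile ready: every system-critical pod reaches Ready, the kube-apiserver process is found, /healthz on port 8444 returns 200, the default service account exists, and the kubelet unit is active. As a rough, illustrative sketch (the profile name and port are taken from this run; none of these exact invocations were executed by the test), the same checks can be repeated by hand:

  kubectl --context default-k8s-diff-port-489319 get pods -n kube-system     # system pods Running/Ready
  kubectl --context default-k8s-diff-port-489319 get --raw='/healthz'        # same endpoint the log polls on :8444
  minikube -p default-k8s-diff-port-489319 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"   # apiserver process check
  minikube -p default-k8s-diff-port-489319 ssh "sudo systemctl is-active kubelet"               # kubelet service check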
	
	
	==> CRI-O <==
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.223299664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308953223273093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f28e496-ee63-4e40-a42e-7658886366af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.224043253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99799d27-c937-49dd-9afd-db9032770910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.224115685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99799d27-c937-49dd-9afd-db9032770910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.224303232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99799d27-c937-49dd-9afd-db9032770910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.262436029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc50cf7b-e20a-45a1-89ba-49908c414f00 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.262592298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc50cf7b-e20a-45a1-89ba-49908c414f00 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.264167791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8e6f759-9c71-4000-a1e8-f04b2fbd154c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.264672604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308953264648358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8e6f759-9c71-4000-a1e8-f04b2fbd154c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.265222591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a6e436c-3770-4cc5-a3d9-fb4edeb65e4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.265273795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a6e436c-3770-4cc5-a3d9-fb4edeb65e4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.265478349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a6e436c-3770-4cc5-a3d9-fb4edeb65e4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.307158235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa4cbdbc-5662-4253-b8ce-017d39bf15cd name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.307249255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa4cbdbc-5662-4253-b8ce-017d39bf15cd name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.308754274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bfc216a-61ec-4132-9b40-a4d04d1200f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.309151077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308953309129469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bfc216a-61ec-4132-9b40-a4d04d1200f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.309841215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28b190e7-ac62-44f9-b47c-89a662051492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.309918607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28b190e7-ac62-44f9-b47c-89a662051492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.310119431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28b190e7-ac62-44f9-b47c-89a662051492 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.347691027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97762c9e-b0ee-4170-81d5-5211bc115890 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.347832944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97762c9e-b0ee-4170-81d5-5211bc115890 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.350114790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50b756ee-b481-47a2-8f98-0a532af6e027 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.350890561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308953350506717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50b756ee-b481-47a2-8f98-0a532af6e027 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.351596758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72d78831-5382-4d0c-a3e6-d4d00d0662da name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.351651507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72d78831-5382-4d0c-a3e6-d4d00d0662da name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:49:13 embed-certs-653322 crio[717]: time="2024-10-07 13:49:13.351837841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72d78831-5382-4d0c-a3e6-d4d00d0662da name=/runtime.v1.RuntimeService/ListContainers
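All of the CRI-O entries captured above are debug-level request/response traces from the kubelet's periodic CRI polling (Version, ImageFsInfo, ListContainers); none of them reports an error. A hedged sketch of how the same daemon log could be read directly on the node, assuming journald and a crictl configured against CRI-O as minikube provisions them (illustrative commands, not run by the test):

  sudo journalctl -u crio --since "2024-10-07 13:49:00" --until "2024-10-07 13:50:00"   # raw CRI-O daemon log for this window
  sudo crictl info                                                                      # runtime status as reported over the CRI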
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be7d2d18111c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   77414d1be7867       storage-provisioner
	185ad082fff4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ed8079b4091f3       coredns-7c65d6cfc9-hrbbb
	ca57da92b1670       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   805bb6668884d       coredns-7c65d6cfc9-l6vfj
	c31cc1272200e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   8654b316558d4       kube-proxy-z9r92
	11d29174badb8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   c73b61ac974dc       kube-controller-manager-embed-certs-653322
	cd446f798df71       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f0f0577dcaa4a       etcd-embed-certs-653322
	380f59263feb5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   530eb8a8a4a35       kube-apiserver-embed-certs-653322
	872a29822cdb8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   b72035e7d994d       kube-scheduler-embed-certs-653322
	d0e1e406683eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   b6f94a2563f83       kube-apiserver-embed-certs-653322
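The listing above is a CRI-level view of the embed-certs-653322 node: all current control-plane containers are Running (etcd, kube-apiserver, kube-scheduler and kube-controller-manager each on attempt 2), and only the earlier kube-apiserver attempt has Exited. An equivalent table can be produced on the node with crictl, assuming it is pointed at the CRI-O socket as minikube configures it (illustrative, not part of this log):

  sudo crictl ps -a                  # all containers, running and exited
  sudo crictl inspect d0e1e406683eb  # details for the exited kube-apiserver attempt, by ID prefix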
	
	
	==> coredns [185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
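Both CoreDNS replicas print only the startup banner and the reload checksum, i.e. no DNS errors were logged in this window. For reference, a sketch of how these logs are typically fetched from a live cluster (the pod name and label come from the output above; the commands themselves are illustrative):

  kubectl -n kube-system logs coredns-7c65d6cfc9-hrbbb
  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50   # both replicas, selected by label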
	
	
	==> describe nodes <==
	Name:               embed-certs-653322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-653322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=embed-certs-653322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:39:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-653322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:49:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:45:14 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:45:14 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:45:14 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:45:14 +0000   Mon, 07 Oct 2024 13:39:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    embed-certs-653322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e959c63f733947bf8e1b2bfbe717544c
	  System UUID:                e959c63f-7339-47bf-8e1b-2bfbe717544c
	  Boot ID:                    afa69290-cc98-4651-a690-b6a53a47693c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-hrbbb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-l6vfj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-653322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-653322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-653322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-z9r92                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-653322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-xwpbg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node embed-certs-653322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node embed-certs-653322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node embed-certs-653322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node embed-certs-653322 event: Registered Node embed-certs-653322 in Controller
	
	
	==> dmesg <==
	[  +4.859726] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.648010] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.402170] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.969218] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.062446] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066469] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.196067] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.162161] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.302732] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[Oct 7 13:35] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +0.066872] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.941183] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.554262] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.053962] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.476708] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 13:39] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.322429] systemd-fstab-generator[2601]: Ignoring "noauto" option for root device
	[  +0.063737] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.005585] systemd-fstab-generator[2921]: Ignoring "noauto" option for root device
	[  +0.098926] kauditd_printk_skb: 54 callbacks suppressed
	[Oct 7 13:40] systemd-fstab-generator[3051]: Ignoring "noauto" option for root device
	[  +0.123031] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.086878] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20] <==
	{"level":"info","ts":"2024-10-07T13:39:54.063722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T13:39:54.063727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d received MsgVoteResp from e5487579cc149d4d at term 2"}
	{"level":"info","ts":"2024-10-07T13:39:54.063738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became leader at term 2"}
	{"level":"info","ts":"2024-10-07T13:39:54.063756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5487579cc149d4d elected leader e5487579cc149d4d at term 2"}
	{"level":"info","ts":"2024-10-07T13:39:54.066764Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5487579cc149d4d","local-member-attributes":"{Name:embed-certs-653322 ClientURLs:[https://192.168.50.36:2379]}","request-path":"/0/members/e5487579cc149d4d/attributes","cluster-id":"31bd1a1c1ff06930","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T13:39:54.066824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:39:54.067167Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:39:54.069776Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:39:54.071601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T13:39:54.071646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T13:39:54.072284Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:39:54.073209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.36:2379"}
	{"level":"info","ts":"2024-10-07T13:39:54.080030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:39:54.080142Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:39:54.081096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:39:54.089612Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:39:54.090481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T13:43:39.668812Z","caller":"traceutil/trace.go:171","msg":"trace[104601373] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"362.452376ms","start":"2024-10-07T13:43:39.306323Z","end":"2024-10-07T13:43:39.668776Z","steps":["trace[104601373] 'process raft request'  (duration: 362.276178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:43:39.671387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:43:39.306297Z","time spent":"364.130054ms","remote":"127.0.0.1:44594","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:628 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-07T13:43:40.278207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.158707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:43:40.278416Z","caller":"traceutil/trace.go:171","msg":"trace[675377128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:630; }","duration":"207.527377ms","start":"2024-10-07T13:43:40.070872Z","end":"2024-10-07T13:43:40.278399Z","steps":["trace[675377128] 'range keys from in-memory index tree'  (duration: 207.06105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:43:40.278371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.417016ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:43:40.278831Z","caller":"traceutil/trace.go:171","msg":"trace[1695370000] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:630; }","duration":"298.878083ms","start":"2024-10-07T13:43:39.979941Z","end":"2024-10-07T13:43:40.278819Z","steps":["trace[1695370000] 'range keys from in-memory index tree'  (duration: 298.366835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:43:41.031815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.593463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-07T13:43:41.032060Z","caller":"traceutil/trace.go:171","msg":"trace[1632186432] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:631; }","duration":"110.865419ms","start":"2024-10-07T13:43:40.921181Z","end":"2024-10-07T13:43:41.032046Z","steps":["trace[1632186432] 'count revisions from in-memory index tree'  (duration: 110.546301ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:49:13 up 14 min,  0 users,  load average: 0.05, 0.18, 0.19
	Linux embed-certs-653322 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:44:56.980571       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:44:56.980673       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1007 13:44:56.981700       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:44:56.981829       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:45:56.981977       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:45:56.982120       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:45:56.982006       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:45:56.982193       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:45:56.983511       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:45:56.983651       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:47:56.984220       1 handler_proxy.go:99] no RequestInfo found in the context
	W1007 13:47:56.984274       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:47:56.984320       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1007 13:47:56.984377       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:47:56.985459       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:47:56.985484       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95] <==
	W1007 13:39:49.101692       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.311981       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.442204       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.470247       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.474885       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.612059       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.967993       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.021663       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.023062       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.089089       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.090310       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.119756       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.136940       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.146834       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.148186       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.212750       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.214271       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.219940       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.281200       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.300075       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.389841       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.508469       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.547486       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.560336       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.560841       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba] <==
	E1007 13:44:03.010017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:44:03.452387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:44:33.017652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:44:33.460329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:45:03.024785       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:45:03.470334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:45:14.372280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-653322"
	E1007 13:45:33.031084       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:45:33.479082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:45:48.739320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="320.342µs"
	I1007 13:46:01.734827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="270.613µs"
	E1007 13:46:03.040400       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:46:03.488940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:46:33.047851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:46:33.499044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:47:03.054163       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:47:03.508854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:47:33.061388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:47:33.517630       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:48:03.071249       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:48:03.526366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:48:33.079282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:48:33.535994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:49:03.085868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:49:03.543986       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:40:04.849759       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:40:04.868063       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.36"]
	E1007 13:40:04.868136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:40:05.018052       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:40:05.018126       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:40:05.018174       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:40:05.036268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:40:05.036618       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:40:05.036637       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:40:05.048130       1 config.go:199] "Starting service config controller"
	I1007 13:40:05.048254       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:40:05.048363       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:40:05.048382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:40:05.055483       1 config.go:328] "Starting node config controller"
	I1007 13:40:05.055499       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:40:05.149110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:40:05.149227       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:40:05.155631       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7] <==
	W1007 13:39:56.950130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:39:56.950253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.029070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:39:57.029299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.053166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1007 13:39:57.053855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:39:57.053968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1007 13:39:57.054164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.082123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:39:57.082356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.106425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 13:39:57.106766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.112511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 13:39:57.112858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.145084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:39:57.145119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.153145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:39:57.153277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.304594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:39:57.304647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.327503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:39:57.327788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.623069       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:39:57.623344       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 13:40:00.812867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:48:02 embed-certs-653322 kubelet[2928]: E1007 13:48:02.718068    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:48:08 embed-certs-653322 kubelet[2928]: E1007 13:48:08.931304    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308888930735808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:08 embed-certs-653322 kubelet[2928]: E1007 13:48:08.931773    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308888930735808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:16 embed-certs-653322 kubelet[2928]: E1007 13:48:16.717752    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:48:18 embed-certs-653322 kubelet[2928]: E1007 13:48:18.934284    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308898933776066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:18 embed-certs-653322 kubelet[2928]: E1007 13:48:18.934751    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308898933776066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:28 embed-certs-653322 kubelet[2928]: E1007 13:48:28.937962    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308908937188488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:28 embed-certs-653322 kubelet[2928]: E1007 13:48:28.938440    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308908937188488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:29 embed-certs-653322 kubelet[2928]: E1007 13:48:29.717913    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:48:38 embed-certs-653322 kubelet[2928]: E1007 13:48:38.941006    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308918940646664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:38 embed-certs-653322 kubelet[2928]: E1007 13:48:38.941468    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308918940646664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:41 embed-certs-653322 kubelet[2928]: E1007 13:48:41.717775    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:48:48 embed-certs-653322 kubelet[2928]: E1007 13:48:48.944192    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308928943682416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:48 embed-certs-653322 kubelet[2928]: E1007 13:48:48.944673    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308928943682416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:54 embed-certs-653322 kubelet[2928]: E1007 13:48:54.716611    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]: E1007 13:48:58.757012    2928 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]: E1007 13:48:58.946811    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308938946277951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:48:58 embed-certs-653322 kubelet[2928]: E1007 13:48:58.946862    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308938946277951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:06 embed-certs-653322 kubelet[2928]: E1007 13:49:06.718458    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:49:08 embed-certs-653322 kubelet[2928]: E1007 13:49:08.949999    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308948949172160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:08 embed-certs-653322 kubelet[2928]: E1007 13:49:08.950404    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308948949172160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91] <==
	I1007 13:40:06.061988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:40:06.071929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:40:06.071992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:40:06.084143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:40:06.084351       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460!
	I1007 13:40:06.084323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7e02454-542f-4e93-af4e-1feee42a6375", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460 became leader
	I1007 13:40:06.185594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-653322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xwpbg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg: exit status 1 (68.125219ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xwpbg" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.83s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.81s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016701 -n no-preload-016701
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-07 13:50:25.193962853 +0000 UTC m=+6158.392502831
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016701 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-016701 logs -n 25: (1.482623997s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:38:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:38:32.108474  802960 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:38:32.108648  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108659  802960 out.go:358] Setting ErrFile to fd 2...
	I1007 13:38:32.108665  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108864  802960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:38:32.109477  802960 out.go:352] Setting JSON to false
	I1007 13:38:32.110672  802960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12061,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:38:32.110773  802960 start.go:139] virtualization: kvm guest
	I1007 13:38:32.113566  802960 out.go:177] * [default-k8s-diff-port-489319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:38:32.115580  802960 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:38:32.115627  802960 notify.go:220] Checking for updates...
	I1007 13:38:32.118464  802960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:38:32.120173  802960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:38:32.121799  802960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:38:32.123382  802960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:38:32.125020  802960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:38:29.209336  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:31.212514  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:32.126861  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:38:32.127255  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.127337  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.143671  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1007 13:38:32.144158  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.144820  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.144844  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.145206  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.145416  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.145655  802960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:38:32.146010  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.146112  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.161508  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1007 13:38:32.162082  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.162517  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.162541  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.162886  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.163112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.200281  802960 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:38:32.201380  802960 start.go:297] selected driver: kvm2
	I1007 13:38:32.201393  802960 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.201499  802960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:38:32.202260  802960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.202353  802960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:38:32.218742  802960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:38:32.219129  802960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:38:32.219168  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:38:32.219221  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:38:32.219254  802960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.219380  802960 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.222273  802960 out.go:177] * Starting "default-k8s-diff-port-489319" primary control-plane node in "default-k8s-diff-port-489319" cluster
	I1007 13:38:32.223750  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:38:32.223801  802960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:38:32.223816  802960 cache.go:56] Caching tarball of preloaded images
	I1007 13:38:32.223891  802960 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:38:32.223901  802960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:38:32.223997  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:38:32.224208  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:38:32.224280  802960 start.go:364] duration metric: took 38.73µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:38:32.224297  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:38:32.224303  802960 fix.go:54] fixHost starting: 
	I1007 13:38:32.224637  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.224674  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.239368  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1007 13:38:32.239869  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.240386  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.240409  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.240813  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.241063  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.241228  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:38:32.243196  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Running err=<nil>
	W1007 13:38:32.243217  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:38:32.245881  802960 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-489319" VM ...
	I1007 13:38:30.514797  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:33.014487  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:32.247486  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:38:32.247524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.247949  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:38:32.250961  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:38:32.251539  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251823  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:38:32.252088  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252375  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:38:32.252944  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:38:32.253182  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:38:32.253197  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:38:35.122367  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:33.709093  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.709691  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.514611  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:38.014557  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:38.198306  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:37.710573  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.210415  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.516522  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.013328  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:44.274468  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:42.709396  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:44.709744  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.711052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:45.015094  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:47.513659  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:49.515994  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:47.346399  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:49.208303  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:51.209295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.013939  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:54.514863  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:53.426353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:56.498332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:53.709052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.710245  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:57.014521  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:59.020215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.208916  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:00.210007  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.514807  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:04.015032  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
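	Interleaved with that loop, two other test processes (logging as 800212 and 800087) are polling metrics-server pods whose Ready condition never flips to True; pod_ready.go simply rechecks the condition on each tick until its own wait expires. An equivalent one-off check with kubectl, assuming the kubeconfig context of the affected profile is known (CONTEXT below is a placeholder, the pod name is the one from the log):

	    kubectl --context CONTEXT -n kube-system get pod metrics-server-6867b74b74-zsm9l \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Prints "False" while the Ready condition is unmet, matching the pod_ready.go lines above.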
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:05.618355  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:02.709614  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:05.211295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:06.514215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.515091  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.690293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:07.709699  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:10.208983  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.014066  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:13.014936  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:14.774418  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:12.209697  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:14.710276  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:15.514317  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.514843  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:39:17.842234  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:17.210309  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:19.210449  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:21.709672  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:20.014047  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:22.014646  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:24.015649  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:23.926328  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:26.994313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:24.203346  800212 pod_ready.go:82] duration metric: took 4m0.00074454s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" ...
	E1007 13:39:24.203420  800212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:39:24.203447  800212 pod_ready.go:39] duration metric: took 4m15.010484686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:39:24.203483  800212 kubeadm.go:597] duration metric: took 4m22.534978235s to restartPrimaryControlPlane
	W1007 13:39:24.203568  800212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:24.203597  800212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:26.018248  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:28.513858  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:30.513894  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:33.013867  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:33.074393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:36.146280  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:35.015200  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:37.514925  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:40.015715  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.514458  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.226242  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.298298  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.013741  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:47.014360  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:49.015110  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:50.616036  800212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.41240969s)
	I1007 13:39:50.616124  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:50.638334  800212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:50.654214  800212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:50.672345  800212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:50.672370  800212 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:50.672429  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:50.699073  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:50.699139  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:50.711774  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:50.737818  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:50.737885  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:50.749603  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.760893  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:50.760965  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.771572  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:50.781793  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:50.781856  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:50.793541  800212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:50.851411  800212 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:39:50.851486  800212 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:50.967773  800212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:50.967938  800212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:50.968105  800212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:50.976935  800212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:51.378305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:50.979096  800212 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:50.979227  800212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:50.979291  800212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:50.979375  800212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:50.979467  800212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:50.979560  800212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:50.979634  800212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:50.979717  800212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:50.979789  800212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:50.979857  800212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:50.979925  800212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:50.979959  800212 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:50.980011  800212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:51.280206  800212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:51.430988  800212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:39:51.677074  800212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:51.867985  800212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:52.283613  800212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:52.284108  800212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:52.288874  800212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.514892  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.515960  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:39:54.454391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:52.290945  800212 out.go:235]   - Booting up control plane ...
	I1007 13:39:52.291058  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:52.291164  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:52.291610  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:52.312059  800212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:52.318321  800212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:52.318412  800212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:52.456671  800212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:39:52.456802  800212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:39:52.958340  800212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.579104ms
	I1007 13:39:52.958484  800212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:39:57.959379  800212 kubeadm.go:310] [api-check] The API server is healthy after 5.001260012s
	I1007 13:39:57.980499  800212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:39:57.999006  800212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:39:58.043754  800212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:39:58.044050  800212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-653322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:39:58.062167  800212 kubeadm.go:310] [bootstrap-token] Using token: 72a6vd.dmbcvepur9l2dhmv
	I1007 13:39:58.064163  800212 out.go:235]   - Configuring RBAC rules ...
	I1007 13:39:58.064326  800212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:39:58.079082  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:39:58.094414  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:39:58.099862  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:39:58.109846  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:39:58.122572  800212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:39:58.370342  800212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:39:58.808645  800212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:39:59.367759  800212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:39:59.368708  800212 kubeadm.go:310] 
	I1007 13:39:59.368834  800212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:39:59.368859  800212 kubeadm.go:310] 
	I1007 13:39:59.368976  800212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:39:59.368991  800212 kubeadm.go:310] 
	I1007 13:39:59.369031  800212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:39:59.369111  800212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:39:59.369155  800212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:39:59.369162  800212 kubeadm.go:310] 
	I1007 13:39:59.369217  800212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:39:59.369245  800212 kubeadm.go:310] 
	I1007 13:39:59.369317  800212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:39:59.369329  800212 kubeadm.go:310] 
	I1007 13:39:59.369390  800212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:39:59.369487  800212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:39:59.369588  800212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:39:59.369600  800212 kubeadm.go:310] 
	I1007 13:39:59.369722  800212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:39:59.369826  800212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:39:59.369838  800212 kubeadm.go:310] 
	I1007 13:39:59.369960  800212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370113  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:39:59.370151  800212 kubeadm.go:310] 	--control-plane 
	I1007 13:39:59.370160  800212 kubeadm.go:310] 
	I1007 13:39:59.370302  800212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:39:59.370331  800212 kubeadm.go:310] 
	I1007 13:39:59.370458  800212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370592  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:39:59.371701  800212 kubeadm.go:310] W1007 13:39:50.802353    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372082  800212 kubeadm.go:310] W1007 13:39:50.803265    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372217  800212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:39:59.372252  800212 cni.go:84] Creating CNI manager for ""
	I1007 13:39:59.372266  800212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:39:59.374383  800212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:39:56.015201  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:58.517383  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:00.534326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:59.376063  800212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:39:59.389097  800212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:39:59.409782  800212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:39:59.409864  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:39:59.409859  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-653322 minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=embed-certs-653322 minikube.k8s.io/primary=true
	I1007 13:39:59.451756  800212 ops.go:34] apiserver oom_adj: -16
	I1007 13:39:59.647019  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.147361  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.647505  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.147866  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.647444  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.147271  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.647066  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.147382  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.647825  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.796730  800212 kubeadm.go:1113] duration metric: took 4.386947643s to wait for elevateKubeSystemPrivileges
	I1007 13:40:03.796776  800212 kubeadm.go:394] duration metric: took 5m2.178460784s to StartCluster
	I1007 13:40:03.796802  800212 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.796927  800212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:40:03.800809  800212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.801152  800212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:40:03.801235  800212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:40:03.801341  800212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-653322"
	I1007 13:40:03.801366  800212 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-653322"
	W1007 13:40:03.801374  800212 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:40:03.801380  800212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-653322"
	I1007 13:40:03.801397  800212 addons.go:69] Setting metrics-server=true in profile "embed-certs-653322"
	I1007 13:40:03.801418  800212 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:40:03.801428  800212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-653322"
	I1007 13:40:03.801442  800212 addons.go:234] Setting addon metrics-server=true in "embed-certs-653322"
	W1007 13:40:03.801452  800212 addons.go:243] addon metrics-server should already be in state true
	I1007 13:40:03.801485  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801411  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801854  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801895  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801901  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.801908  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801937  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.802059  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.803364  800212 out.go:177] * Verifying Kubernetes components...
	I1007 13:40:03.805464  800212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:40:03.820021  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1007 13:40:03.820297  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1007 13:40:03.820632  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.820812  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.821460  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821482  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.821598  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1007 13:40:03.821627  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821639  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.822131  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822377  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.822388  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822769  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822823  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.822938  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822990  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.823583  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.823609  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.824011  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.824209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.828672  800212 addons.go:234] Setting addon default-storageclass=true in "embed-certs-653322"
	W1007 13:40:03.828697  800212 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:40:03.828731  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.829118  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.829169  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.839251  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1007 13:40:03.839800  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.840506  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.840533  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.840894  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.841130  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.842660  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1007 13:40:03.843181  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.843235  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.843819  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.843841  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.844191  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.844469  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.845247  800212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:40:03.846191  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.846688  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:40:03.846712  800212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:40:03.846737  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.847801  800212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:40:01.015857  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.515782  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.849482  800212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:03.849504  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:40:03.849528  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.851923  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852765  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.852798  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852987  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.853209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.853367  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.853482  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.854532  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1007 13:40:03.854540  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855100  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.855127  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855438  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.855484  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.855836  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.856149  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.856179  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.856258  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.856436  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.856791  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.857523  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.857572  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.873780  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1007 13:40:03.874162  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.874943  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.874958  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.875358  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.875581  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.877658  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.877924  800212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:03.877940  800212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:40:03.877962  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.881764  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882241  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.882272  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882619  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.882839  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.882999  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.883146  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:04.059493  800212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:40:04.092602  800212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135614  800212 node_ready.go:49] node "embed-certs-653322" has status "Ready":"True"
	I1007 13:40:04.135639  800212 node_ready.go:38] duration metric: took 42.999262ms for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135649  800212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:04.168633  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:04.177323  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:04.206431  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:04.358331  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:40:04.358360  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:40:04.453932  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:40:04.453978  800212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:40:04.543045  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:04.543079  800212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:40:04.628016  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:05.373199  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166722968s)
	I1007 13:40:05.373269  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373286  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373188  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195822413s)
	I1007 13:40:05.373374  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373395  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373726  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373746  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373756  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373764  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373772  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.373786  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373798  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373810  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373819  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.374033  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374019  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374056  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.374077  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374104  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374123  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.449400  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.449435  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.449768  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.449785  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034194  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.406118465s)
	I1007 13:40:06.034270  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034292  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034583  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034603  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034613  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034620  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034852  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:06.034920  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034951  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034967  800212 addons.go:475] Verifying addon metrics-server=true in "embed-certs-653322"
	I1007 13:40:06.036901  800212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:40:03.602357  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:06.038108  800212 addons.go:510] duration metric: took 2.236891318s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1007 13:40:06.178973  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:06.015270  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.514554  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.675453  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:10.182593  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.182620  800212 pod_ready.go:82] duration metric: took 6.013956349s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.182630  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189183  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.189216  800212 pod_ready.go:82] duration metric: took 6.578623ms for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189229  800212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195272  800212 pod_ready.go:93] pod "etcd-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.195298  800212 pod_ready.go:82] duration metric: took 6.06024ms for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195308  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203341  800212 pod_ready.go:93] pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.203365  800212 pod_ready.go:82] duration metric: took 8.050464ms for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203375  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209333  800212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.209364  800212 pod_ready.go:82] duration metric: took 5.980877ms for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209377  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573541  800212 pod_ready.go:93] pod "kube-proxy-z9r92" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.573574  800212 pod_ready.go:82] duration metric: took 364.188673ms for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573586  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973294  800212 pod_ready.go:93] pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.973325  800212 pod_ready.go:82] duration metric: took 399.732244ms for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973334  800212 pod_ready.go:39] duration metric: took 6.837673484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:10.973354  800212 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:40:10.973424  800212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:40:10.989629  800212 api_server.go:72] duration metric: took 7.188432004s to wait for apiserver process to appear ...
	I1007 13:40:10.989661  800212 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:40:10.989690  800212 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1007 13:40:10.994679  800212 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1007 13:40:10.995855  800212 api_server.go:141] control plane version: v1.31.1
	I1007 13:40:10.995882  800212 api_server.go:131] duration metric: took 6.212413ms to wait for apiserver health ...
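For reference, the healthz and version probes recorded just above can be reproduced by hand against the same endpoint. The sketch below is illustrative only and is not part of the captured log; the certificate paths are the usual minikube profile locations and are assumptions, not values taken from this run.

	# Illustrative sketch, not from the test run: manual equivalent of the apiserver checks above.
	curl --cacert "$HOME/.minikube/ca.crt" \
	     --cert "$HOME/.minikube/profiles/embed-certs-653322/client.crt" \
	     --key "$HOME/.minikube/profiles/embed-certs-653322/client.key" \
	     https://192.168.50.36:8443/healthz               # expected response body: ok
	kubectl --context embed-certs-653322 version           # reports the control-plane version (v1.31.1 in this run)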
	I1007 13:40:10.995894  800212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:40:11.176174  800212 system_pods.go:59] 9 kube-system pods found
	I1007 13:40:11.176207  800212 system_pods.go:61] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.176213  800212 system_pods.go:61] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.176217  800212 system_pods.go:61] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.176221  800212 system_pods.go:61] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.176225  800212 system_pods.go:61] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.176228  800212 system_pods.go:61] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.176231  800212 system_pods.go:61] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.176236  800212 system_pods.go:61] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.176240  800212 system_pods.go:61] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.176251  800212 system_pods.go:74] duration metric: took 180.350309ms to wait for pod list to return data ...
	I1007 13:40:11.176258  800212 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:40:11.374362  800212 default_sa.go:45] found service account: "default"
	I1007 13:40:11.374397  800212 default_sa.go:55] duration metric: took 198.130993ms for default service account to be created ...
	I1007 13:40:11.374410  800212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:40:11.577087  800212 system_pods.go:86] 9 kube-system pods found
	I1007 13:40:11.577124  800212 system_pods.go:89] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.577130  800212 system_pods.go:89] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.577134  800212 system_pods.go:89] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.577138  800212 system_pods.go:89] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.577141  800212 system_pods.go:89] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.577145  800212 system_pods.go:89] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.577149  800212 system_pods.go:89] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.577157  800212 system_pods.go:89] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.577161  800212 system_pods.go:89] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.577171  800212 system_pods.go:126] duration metric: took 202.754732ms to wait for k8s-apps to be running ...
	I1007 13:40:11.577179  800212 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:40:11.577228  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:40:11.595122  800212 system_svc.go:56] duration metric: took 17.926197ms WaitForService to wait for kubelet
	I1007 13:40:11.595159  800212 kubeadm.go:582] duration metric: took 7.793966621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:40:11.595189  800212 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:40:11.774788  800212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:40:11.774819  800212 node_conditions.go:123] node cpu capacity is 2
	I1007 13:40:11.774833  800212 node_conditions.go:105] duration metric: took 179.638486ms to run NodePressure ...
	I1007 13:40:11.774845  800212 start.go:241] waiting for startup goroutines ...
	I1007 13:40:11.774852  800212 start.go:246] waiting for cluster config update ...
	I1007 13:40:11.774862  800212 start.go:255] writing updated cluster config ...
	I1007 13:40:11.775199  800212 ssh_runner.go:195] Run: rm -f paused
	I1007 13:40:11.829109  800212 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:40:11.831389  800212 out.go:177] * Done! kubectl is now configured to use "embed-certs-653322" cluster and "default" namespace by default
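With this profile finished, the host kubeconfig now points at the new cluster. A quick hedged check from the host (context name taken from the line above; adjust if you keep kubeconfig elsewhere):

    kubectl config current-context      # should print embed-certs-653322
    kubectl get pods -n kube-system     # core components plus the still-pending metrics-server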
	I1007 13:40:09.682305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:11.014595  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:13.514109  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:12.754391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:16.015105  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.513935  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.834414  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.906376  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.015129  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:23.518245  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:26.014981  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:28.513904  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:27.986365  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.058375  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.015269  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.514729  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
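The kubelet-check above polls the kubelet's local health endpoint, and the exact curl is quoted in the message, so the failure can be reproduced directly on that node. A small sketch (the systemctl command mirrors what kubeadm itself suggests later in this log):

    curl -sSL http://localhost:10248/healthz || echo "kubelet healthz not reachable"
    sudo systemctl status kubelet --no-pager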
	I1007 13:40:36.013424  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.014881  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.507584  800087 pod_ready.go:82] duration metric: took 4m0.000325195s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" ...
	E1007 13:40:38.507633  800087 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:40:38.507657  800087 pod_ready.go:39] duration metric: took 4m14.542185527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:38.507694  800087 kubeadm.go:597] duration metric: took 4m21.215120888s to restartPrimaryControlPlane
	W1007 13:40:38.507784  800087 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:40:38.507824  800087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:37.138368  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:40.210391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:46.290312  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:49.362313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:55.442333  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:58.514279  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:04.757708  800087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.249856079s)
	I1007 13:41:04.757796  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:04.787393  800087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:41:04.805311  800087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:04.819815  800087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:04.819839  800087 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:04.819889  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:04.832607  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:04.832673  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:04.847624  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:04.859808  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:04.859890  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:04.886041  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.896677  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:04.896746  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.906688  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:04.915884  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:04.915965  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
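The sequence above is one check repeated for each kubeconfig: grep for the expected control-plane endpoint and, if it is absent or the file is missing, remove the file so kubeadm regenerates it. A compact sketch equivalent to the individual commands in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done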
	I1007 13:41:04.925767  800087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:04.981704  800087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:41:04.981799  800087 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:05.104530  800087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:05.104648  800087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:05.104750  800087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:41:05.114782  800087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:05.116948  800087 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:05.117074  800087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:05.117168  800087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:05.117275  800087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:05.117358  800087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:05.117447  800087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:05.117522  800087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:05.117620  800087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:05.117733  800087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:05.117851  800087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:05.117961  800087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:05.118055  800087 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:05.118147  800087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:05.216990  800087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:05.548814  800087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:41:05.921322  800087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:06.206950  800087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:06.412087  800087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:06.412698  800087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:06.415768  800087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:04.598286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:06.418055  800087 out.go:235]   - Booting up control plane ...
	I1007 13:41:06.418195  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:06.419324  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:06.420095  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:06.437974  800087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:06.447497  800087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:06.447580  800087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:06.582080  800087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:41:06.582223  800087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:41:07.583021  800087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001204833s
	I1007 13:41:07.583165  800087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:07.666427  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:13.085728  800087 kubeadm.go:310] [api-check] The API server is healthy after 5.502732546s
	I1007 13:41:13.105047  800087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:41:13.122083  800087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:41:13.157464  800087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:41:13.157751  800087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-016701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:41:13.176062  800087 kubeadm.go:310] [bootstrap-token] Using token: ott6bx.mfcul37ilsfpftrr
	I1007 13:41:13.177574  800087 out.go:235]   - Configuring RBAC rules ...
	I1007 13:41:13.177739  800087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:41:13.184629  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:41:13.200989  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:41:13.206521  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:41:13.212338  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:41:13.217063  800087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:41:13.493012  800087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:41:13.926154  800087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:41:14.500818  800087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:41:14.500844  800087 kubeadm.go:310] 
	I1007 13:41:14.500894  800087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:41:14.500899  800087 kubeadm.go:310] 
	I1007 13:41:14.500988  800087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:41:14.501001  800087 kubeadm.go:310] 
	I1007 13:41:14.501041  800087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:41:14.501095  800087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:41:14.501196  800087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:41:14.501223  800087 kubeadm.go:310] 
	I1007 13:41:14.501307  800087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:41:14.501316  800087 kubeadm.go:310] 
	I1007 13:41:14.501379  800087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:41:14.501448  800087 kubeadm.go:310] 
	I1007 13:41:14.501533  800087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:41:14.501629  800087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:41:14.501733  800087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:41:14.501750  800087 kubeadm.go:310] 
	I1007 13:41:14.501854  800087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:41:14.501964  800087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:41:14.501973  800087 kubeadm.go:310] 
	I1007 13:41:14.502109  800087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502269  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:41:14.502311  800087 kubeadm.go:310] 	--control-plane 
	I1007 13:41:14.502322  800087 kubeadm.go:310] 
	I1007 13:41:14.502443  800087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:41:14.502453  800087 kubeadm.go:310] 
	I1007 13:41:14.502600  800087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502755  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:41:14.503943  800087 kubeadm.go:310] W1007 13:41:04.948448    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504331  800087 kubeadm.go:310] W1007 13:41:04.949311    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504448  800087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
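The bootstrap token printed in the join commands above is time-limited (kubeadm's default TTL is 24h), so joining a node later requires minting a fresh one. A hedged sketch using the same pinned kubeadm binary this run uses; the printed command includes a new token while the --discovery-token-ca-cert-hash stays the same as long as the cluster CA does:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm token create --print-join-command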
	I1007 13:41:14.504466  800087 cni.go:84] Creating CNI manager for ""
	I1007 13:41:14.504474  800087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:41:14.506680  800087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:41:14.508369  800087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:41:14.520414  800087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
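minikube writes the bridge CNI config by streaming an in-memory asset over SSH (the "scp memory" above); the 496-byte payload itself is not shown in the log. The following is only a rough sketch of what a minimal bridge conflist of this kind can look like, written via a heredoc; all field values are assumptions, not the file this run actually generated:

    # illustrative content only; not the exact 496-byte file from the log
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF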
	I1007 13:41:14.544669  800087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:41:14.544833  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:14.544851  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016701 minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=no-preload-016701 minikube.k8s.io/primary=true
	I1007 13:41:14.772594  800087 ops.go:34] apiserver oom_adj: -16
	I1007 13:41:14.772619  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:13.746372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:16.822393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:15.273211  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:15.772786  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.273580  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.773395  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.272868  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.773484  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.273717  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.405010  800087 kubeadm.go:1113] duration metric: took 3.86025273s to wait for elevateKubeSystemPrivileges
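The repeated "kubectl get sa default" runs above are a readiness poll: minikube has just created the minikube-rbac clusterrolebinding (granting cluster-admin to kube-system's default service account) and then loops until the default service account exists. Roughly, in shell terms, with both commands taken from the log:

    K=/var/lib/minikube/binaries/v1.31.1/kubectl
    sudo $K --kubeconfig=/var/lib/minikube/kubeconfig create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # poll until the default ServiceAccount shows up (it is created asynchronously after init)
    until sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done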
	I1007 13:41:18.405055  800087 kubeadm.go:394] duration metric: took 5m1.164485599s to StartCluster
	I1007 13:41:18.405081  800087 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.405182  800087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:41:18.406935  800087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.407244  800087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:41:18.407398  800087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:41:18.407513  800087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016701"
	I1007 13:41:18.407539  800087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-016701"
	W1007 13:41:18.407549  800087 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:41:18.407548  800087 addons.go:69] Setting default-storageclass=true in profile "no-preload-016701"
	I1007 13:41:18.407572  800087 addons.go:69] Setting metrics-server=true in profile "no-preload-016701"
	I1007 13:41:18.407615  800087 addons.go:234] Setting addon metrics-server=true in "no-preload-016701"
	W1007 13:41:18.407721  800087 addons.go:243] addon metrics-server should already be in state true
	I1007 13:41:18.407850  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407591  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407545  800087 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:41:18.407594  800087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016701"
	I1007 13:41:18.408374  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408387  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408417  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408424  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408447  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408542  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.409406  800087 out.go:177] * Verifying Kubernetes components...
	I1007 13:41:18.411018  800087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:41:18.425614  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I1007 13:41:18.426275  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.426764  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I1007 13:41:18.426926  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.426956  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427308  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.427410  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.427840  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.427862  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427976  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.428024  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.428257  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.428470  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.428478  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I1007 13:41:18.428980  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.429578  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.429605  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.429927  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.430564  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.430602  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.431895  800087 addons.go:234] Setting addon default-storageclass=true in "no-preload-016701"
	W1007 13:41:18.431918  800087 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:41:18.431952  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.432279  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.432319  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.445003  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1007 13:41:18.445514  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.445968  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1007 13:41:18.446101  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.446125  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.446534  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.446580  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.446821  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.447159  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.447187  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.447559  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.447754  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.449595  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.450543  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.452177  800087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:41:18.452788  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I1007 13:41:18.453311  800087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:41:18.453332  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.454421  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.454443  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.454767  800087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.454791  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:41:18.454813  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.454902  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.455260  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:41:18.455277  800087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:41:18.455293  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.455514  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.455574  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.458904  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459133  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459321  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459529  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459681  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459699  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459704  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.459849  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.459962  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459994  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.460161  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.460349  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.460480  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.495484  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1007 13:41:18.496027  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.496790  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.496828  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.497324  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.497537  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.499178  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.499425  800087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.499440  800087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:41:18.499457  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.502808  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503337  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.503363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503573  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.503796  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.503972  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.504135  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.607501  800087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:41:18.631538  800087 node_ready.go:35] waiting up to 6m0s for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645041  800087 node_ready.go:49] node "no-preload-016701" has status "Ready":"True"
	I1007 13:41:18.645065  800087 node_ready.go:38] duration metric: took 13.492405ms for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645076  800087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:18.651831  800087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:18.689502  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.714363  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:41:18.714386  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:41:18.738095  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.794344  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:41:18.794384  800087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:41:18.906848  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:18.906886  800087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:41:18.991553  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:19.434333  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434360  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434687  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.434701  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434710  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434716  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434932  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434987  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435004  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.435015  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434993  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435269  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435274  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435282  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.435290  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.435297  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.436889  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.436909  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.456678  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.456714  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.457112  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.457133  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.457164  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.382548  800087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.390945966s)
	I1007 13:41:20.382614  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.382628  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.382952  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383052  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383068  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.383077  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.383010  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.383354  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383370  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383384  800087 addons.go:475] Verifying addon metrics-server=true in "no-preload-016701"
	I1007 13:41:20.385366  800087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:41:20.386603  800087 addons.go:510] duration metric: took 1.979211294s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
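"Verifying addon metrics-server" above only confirms the manifests were accepted; the Deployment still has to become Ready. A hedged way to watch it from the host (label selector assumed from the upstream metrics-server manifests; in this profile it stays Pending because the Deployment points at the placeholder image fake.domain/registry.k8s.io/echoserver:1.4):

    kubectl -n kube-system get deploy,pods -l k8s-app=metrics-server
    kubectl top nodes   # only works once metrics-server is actually serving the metrics API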
	I1007 13:41:20.665725  800087 pod_ready.go:103] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"False"
	I1007 13:41:22.158063  800087 pod_ready.go:93] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:22.158090  800087 pod_ready.go:82] duration metric: took 3.506228901s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:22.158100  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165304  800087 pod_ready.go:93] pod "kube-apiserver-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.165330  800087 pod_ready.go:82] duration metric: took 2.007223213s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165340  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172907  800087 pod_ready.go:93] pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.172930  800087 pod_ready.go:82] duration metric: took 7.583143ms for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172939  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180216  800087 pod_ready.go:93] pod "kube-proxy-bjqg2" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.180243  800087 pod_ready.go:82] duration metric: took 7.297732ms for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180255  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185080  800087 pod_ready.go:93] pod "kube-scheduler-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.185108  800087 pod_ready.go:82] duration metric: took 4.84549ms for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185119  800087 pod_ready.go:39] duration metric: took 5.540032302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:24.185141  800087 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:41:24.185197  800087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:41:24.201360  800087 api_server.go:72] duration metric: took 5.794073168s to wait for apiserver process to appear ...
	I1007 13:41:24.201464  800087 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:41:24.201496  800087 api_server.go:253] Checking apiserver healthz at https://192.168.39.197:8443/healthz ...
	I1007 13:41:24.207141  800087 api_server.go:279] https://192.168.39.197:8443/healthz returned 200:
	ok
	I1007 13:41:24.208456  800087 api_server.go:141] control plane version: v1.31.1
	I1007 13:41:24.208481  800087 api_server.go:131] duration metric: took 7.007742ms to wait for apiserver health ...
	I1007 13:41:24.208491  800087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:41:24.213660  800087 system_pods.go:59] 9 kube-system pods found
	I1007 13:41:24.213693  800087 system_pods.go:61] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213701  800087 system_pods.go:61] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213711  800087 system_pods.go:61] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.213716  800087 system_pods.go:61] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.213719  800087 system_pods.go:61] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.213722  800087 system_pods.go:61] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.213725  800087 system_pods.go:61] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.213730  800087 system_pods.go:61] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.213734  800087 system_pods.go:61] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.213742  800087 system_pods.go:74] duration metric: took 5.244677ms to wait for pod list to return data ...
	I1007 13:41:24.213749  800087 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:41:24.216891  800087 default_sa.go:45] found service account: "default"
	I1007 13:41:24.216923  800087 default_sa.go:55] duration metric: took 3.165762ms for default service account to be created ...
	I1007 13:41:24.216936  800087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:41:24.366926  800087 system_pods.go:86] 9 kube-system pods found
	I1007 13:41:24.366962  800087 system_pods.go:89] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366970  800087 system_pods.go:89] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366977  800087 system_pods.go:89] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.366982  800087 system_pods.go:89] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.366986  800087 system_pods.go:89] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.366990  800087 system_pods.go:89] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.366993  800087 system_pods.go:89] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.366998  800087 system_pods.go:89] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.367001  800087 system_pods.go:89] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.367011  800087 system_pods.go:126] duration metric: took 150.068129ms to wait for k8s-apps to be running ...
	I1007 13:41:24.367018  800087 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:41:24.367064  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:24.383197  800087 system_svc.go:56] duration metric: took 16.165166ms WaitForService to wait for kubelet
	I1007 13:41:24.383232  800087 kubeadm.go:582] duration metric: took 5.975954284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:41:24.383256  800087 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:41:24.563433  800087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:41:24.563469  800087 node_conditions.go:123] node cpu capacity is 2
	I1007 13:41:24.563486  800087 node_conditions.go:105] duration metric: took 180.224622ms to run NodePressure ...
	I1007 13:41:24.563503  800087 start.go:241] waiting for startup goroutines ...
	I1007 13:41:24.563514  800087 start.go:246] waiting for cluster config update ...
	I1007 13:41:24.563529  800087 start.go:255] writing updated cluster config ...
	I1007 13:41:24.563898  800087 ssh_runner.go:195] Run: rm -f paused
	I1007 13:41:24.619289  800087 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:41:24.621527  800087 out.go:177] * Done! kubectl is now configured to use "no-preload-016701" cluster and "default" namespace by default
	I1007 13:41:22.898326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:25.970388  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:32.050353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:35.122329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:41.202320  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:44.274335  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
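Process 802960 has been failing the same way for minutes: every SSH dial to 192.168.61.101:22 comes back "no route to host", which points at the guest never obtaining (or having lost) its address rather than at sshd. A couple of hedged checks from the host, assuming nc and virsh are available and substituting the profile's libvirt network name:

    nc -vz -w 5 192.168.61.101 22     # reproduce the TCP dial the log shows failing
    virsh net-list --all              # find the minikube-created network for this profile
    virsh net-dhcp-leases <network>   # check whether the guest ever got a lease for 192.168.61.101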
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
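For anyone reproducing this failure by hand, the commands kubeadm recommends above can be scripted. The following is a minimal, hypothetical Go helper (not part of minikube) that simply runs those same diagnostics in order and prints their output:

    // kubelet-diag.go, a hypothetical helper: runs the checks suggested by the
    // kubeadm output above (systemctl, journalctl, crictl) and prints results.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := [][]string{
            {"systemctl", "status", "kubelet"},
            {"journalctl", "-xeu", "kubelet", "--no-pager"},
            {"crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a"},
        }
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            fmt.Printf("$ %v\n%s\n", c, out)
            if err != nil {
                // A non-zero exit here usually just means the unit is down,
                // which is exactly the state being diagnosed.
                fmt.Println("exit:", err)
            }
        }
    }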
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:41:50.354360  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:53.426359  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:59.510321  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:02.578322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:08.658292  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:11.730352  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:17.810322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:20.882397  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:26.962343  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:30.034345  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:36.114353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:39.186453  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.266293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:48.338329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:54.418332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:57.490294  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:03.570372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:06.642286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:09.643253  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:09.643290  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643598  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:09.643627  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:09.645347  802960 machine.go:96] duration metric: took 4m37.397836997s to provisionDockerMachine
	I1007 13:43:09.645389  802960 fix.go:56] duration metric: took 4m37.421085967s for fixHost
	I1007 13:43:09.645394  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 4m37.421104002s
	W1007 13:43:09.645409  802960 start.go:714] error starting host: provision: host is not running
	W1007 13:43:09.645530  802960 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 13:43:09.645542  802960 start.go:729] Will try again in 5 seconds ...
	I1007 13:43:14.646206  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:43:14.646330  802960 start.go:364] duration metric: took 74.211µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:43:14.646374  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:43:14.646382  802960 fix.go:54] fixHost starting: 
	I1007 13:43:14.646717  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:43:14.646746  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:43:14.662426  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I1007 13:43:14.663016  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:43:14.663790  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:43:14.663822  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:43:14.664176  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:43:14.664429  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:14.664605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:43:14.666440  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Stopped err=<nil>
	I1007 13:43:14.666467  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	W1007 13:43:14.666648  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:43:14.668507  802960 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-489319" ...
	I1007 13:43:14.669973  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Start
	I1007 13:43:14.670294  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring networks are active...
	I1007 13:43:14.671299  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network default is active
	I1007 13:43:14.671623  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network mk-default-k8s-diff-port-489319 is active
	I1007 13:43:14.672332  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Getting domain xml...
	I1007 13:43:14.673106  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Creating domain...
	I1007 13:43:15.035227  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting to get IP...
	I1007 13:43:15.036226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036673  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036768  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.036657  804186 retry.go:31] will retry after 204.852009ms: waiting for machine to come up
	I1007 13:43:15.243827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244610  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244699  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.244581  804186 retry.go:31] will retry after 334.887784ms: waiting for machine to come up
	I1007 13:43:15.581226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581717  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581747  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.581665  804186 retry.go:31] will retry after 354.992125ms: waiting for machine to come up
	I1007 13:43:15.938078  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938577  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938614  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.938518  804186 retry.go:31] will retry after 592.784389ms: waiting for machine to come up
	I1007 13:43:16.533531  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534103  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534128  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:16.534054  804186 retry.go:31] will retry after 756.034822ms: waiting for machine to come up
	I1007 13:43:17.291995  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292785  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292807  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:17.292736  804186 retry.go:31] will retry after 896.816081ms: waiting for machine to come up
	I1007 13:43:18.191016  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191527  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191560  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:18.191466  804186 retry.go:31] will retry after 1.08609499s: waiting for machine to come up
	I1007 13:43:19.280109  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280537  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280576  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:19.280520  804186 retry.go:31] will retry after 1.392221474s: waiting for machine to come up
	I1007 13:43:20.674622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675071  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675115  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:20.675031  804186 retry.go:31] will retry after 1.78021676s: waiting for machine to come up
	I1007 13:43:22.457647  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458248  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:22.458160  804186 retry.go:31] will retry after 2.117086662s: waiting for machine to come up
	I1007 13:43:24.576838  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577415  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577445  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:24.577364  804186 retry.go:31] will retry after 2.850833043s: waiting for machine to come up
	I1007 13:43:27.432222  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432855  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432882  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:27.432789  804186 retry.go:31] will retry after 3.63047619s: waiting for machine to come up
	I1007 13:43:31.065089  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.065729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Found IP for machine: 192.168.61.101
	I1007 13:43:31.065759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserving static IP address...
	I1007 13:43:31.065782  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has current primary IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.066317  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.066362  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserved static IP address: 192.168.61.101
	I1007 13:43:31.066395  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | skip adding static IP to network mk-default-k8s-diff-port-489319 - found existing host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"}
	I1007 13:43:31.066407  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for SSH to be available...
	I1007 13:43:31.066449  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Getting to WaitForSSH function...
	I1007 13:43:31.068871  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069233  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.069265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH client type: external
	I1007 13:43:31.069398  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa (-rw-------)
	I1007 13:43:31.069451  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:43:31.069466  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | About to run SSH command:
	I1007 13:43:31.069475  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | exit 0
	I1007 13:43:31.194580  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | SSH cmd err, output: <nil>: 
	I1007 13:43:31.195021  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetConfigRaw
	I1007 13:43:31.195801  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.198966  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199324  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.199359  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199635  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:43:31.199893  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:43:31.199919  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:31.200168  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.202444  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202817  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.202849  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202989  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.203185  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203352  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.203683  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.203930  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.203943  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:43:31.307182  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:43:31.307224  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307497  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:31.307525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307722  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.310462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.310835  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.310905  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.311014  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.311192  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311437  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311613  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.311794  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.311969  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.311981  802960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489319 && echo "default-k8s-diff-port-489319" | sudo tee /etc/hostname
	I1007 13:43:31.436251  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489319
	
	I1007 13:43:31.436288  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.439927  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440241  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.440276  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440616  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.440887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441042  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441197  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.441360  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.441584  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.441612  802960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489319/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:43:31.552909  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:31.552947  802960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:43:31.552983  802960 buildroot.go:174] setting up certificates
	I1007 13:43:31.553002  802960 provision.go:84] configureAuth start
	I1007 13:43:31.553012  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.553454  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.556642  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557015  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.557055  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.559909  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560460  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.560487  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560719  802960 provision.go:143] copyHostCerts
	I1007 13:43:31.560792  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:43:31.560812  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:43:31.560889  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:43:31.561045  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:43:31.561058  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:43:31.561084  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:43:31.561171  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:43:31.561180  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:43:31.561208  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:43:31.561271  802960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489319 san=[127.0.0.1 192.168.61.101 default-k8s-diff-port-489319 localhost minikube]
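The provision step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.61.101, and the host names shown. A rough standard-library equivalent in Go is sketched below (self-signed for brevity, whereas the log shows minikube signing with the listed CA key; validity period and key usages are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-489319"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.101")},
            DNSNames:     []string{"default-k8s-diff-port-489319", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }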
	I1007 13:43:31.871377  802960 provision.go:177] copyRemoteCerts
	I1007 13:43:31.871459  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:43:31.871489  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.874464  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.874887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.874925  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.875112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.875368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.875547  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.875675  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:31.957423  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:43:31.988554  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1007 13:43:32.018470  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:43:32.046799  802960 provision.go:87] duration metric: took 493.782862ms to configureAuth
	I1007 13:43:32.046830  802960 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:43:32.047021  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:43:32.047151  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.050313  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.050727  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.050760  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.051011  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.051216  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051385  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051522  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.051685  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.051878  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.051893  802960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:43:32.291927  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:43:32.291957  802960 machine.go:96] duration metric: took 1.092049658s to provisionDockerMachine
	I1007 13:43:32.291970  802960 start.go:293] postStartSetup for "default-k8s-diff-port-489319" (driver="kvm2")
	I1007 13:43:32.291985  802960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:43:32.292025  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.292491  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:43:32.292523  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.296195  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296625  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.296660  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296889  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.297104  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.297300  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.297479  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.377749  802960 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:43:32.382419  802960 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:43:32.382459  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:43:32.382557  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:43:32.382663  802960 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:43:32.382767  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:43:32.394059  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:32.422256  802960 start.go:296] duration metric: took 130.264438ms for postStartSetup
	I1007 13:43:32.422310  802960 fix.go:56] duration metric: took 17.775926417s for fixHost
	I1007 13:43:32.422340  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.425739  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.426254  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.426678  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426941  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.427080  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.427294  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.427305  802960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:43:32.531411  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308612.494637714
	
	I1007 13:43:32.531442  802960 fix.go:216] guest clock: 1728308612.494637714
	I1007 13:43:32.531450  802960 fix.go:229] Guest: 2024-10-07 13:43:32.494637714 +0000 UTC Remote: 2024-10-07 13:43:32.422315329 +0000 UTC m=+300.358475670 (delta=72.322385ms)
	I1007 13:43:32.531474  802960 fix.go:200] guest clock delta is within tolerance: 72.322385ms
	I1007 13:43:32.531480  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 17.885135029s
	I1007 13:43:32.531503  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.531787  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:32.534783  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.535265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535472  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536178  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536404  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536518  802960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:43:32.536581  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.536697  802960 ssh_runner.go:195] Run: cat /version.json
	I1007 13:43:32.536729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.539709  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.539743  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540166  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540202  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540348  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540417  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540598  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540638  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540762  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.540777  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540884  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.540947  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.541089  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.642238  802960 ssh_runner.go:195] Run: systemctl --version
	I1007 13:43:32.649391  802960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:43:32.799266  802960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:43:32.805598  802960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:43:32.805707  802960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:43:32.823518  802960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:43:32.823560  802960 start.go:495] detecting cgroup driver to use...
	I1007 13:43:32.823651  802960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:43:32.842054  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:43:32.858474  802960 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:43:32.858550  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:43:32.873750  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:43:32.889165  802960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:43:33.019729  802960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:43:33.182269  802960 docker.go:233] disabling docker service ...
	I1007 13:43:33.182371  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:43:33.198610  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:43:33.213911  802960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:43:33.343594  802960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:43:33.476026  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:43:33.493130  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:43:33.513584  802960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:43:33.513652  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.525714  802960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:43:33.525816  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.538658  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.551146  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.564914  802960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:43:33.578180  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.590140  802960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.610967  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.624890  802960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:43:33.636736  802960 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:43:33.636825  802960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:43:33.652573  802960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:43:33.665083  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:33.800780  802960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:43:33.898225  802960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:43:33.898309  802960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:43:33.903209  802960 start.go:563] Will wait 60s for crictl version
	I1007 13:43:33.903269  802960 ssh_runner.go:195] Run: which crictl
	I1007 13:43:33.907326  802960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:43:33.959008  802960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:43:33.959168  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:33.990929  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:34.023756  802960 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:43:34.025496  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:34.028784  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029327  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:34.029360  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029672  802960 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:43:34.034690  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:34.048101  802960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:43:34.048259  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:43:34.048325  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:34.086926  802960 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:43:34.087050  802960 ssh_runner.go:195] Run: which lz4
	I1007 13:43:34.091973  802960 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:43:34.096623  802960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:43:34.096671  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:43:35.604800  802960 crio.go:462] duration metric: took 1.512877493s to copy over tarball
	I1007 13:43:35.604892  802960 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:43:37.805292  802960 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200363211s)
	I1007 13:43:37.805327  802960 crio.go:469] duration metric: took 2.200488229s to extract the tarball
	I1007 13:43:37.805338  802960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:43:37.845477  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:37.895532  802960 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:43:37.895562  802960 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:43:37.895574  802960 kubeadm.go:934] updating node { 192.168.61.101 8444 v1.31.1 crio true true} ...
	I1007 13:43:37.895725  802960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-489319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:43:37.895804  802960 ssh_runner.go:195] Run: crio config
	I1007 13:43:37.949367  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:37.949395  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:37.949410  802960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:43:37.949433  802960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.101 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489319 NodeName:default-k8s-diff-port-489319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:43:37.949576  802960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.101
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:43:37.949659  802960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:43:37.959941  802960 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:43:37.960076  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:43:37.970766  802960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1007 13:43:37.989311  802960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:43:38.009634  802960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1007 13:43:38.027642  802960 ssh_runner.go:195] Run: grep 192.168.61.101	control-plane.minikube.internal$ /etc/hosts
	I1007 13:43:38.031764  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:38.044131  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:38.185253  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:43:38.212538  802960 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319 for IP: 192.168.61.101
	I1007 13:43:38.212565  802960 certs.go:194] generating shared ca certs ...
	I1007 13:43:38.212589  802960 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:43:38.212799  802960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:43:38.212859  802960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:43:38.212873  802960 certs.go:256] generating profile certs ...
	I1007 13:43:38.212997  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/client.key
	I1007 13:43:38.213082  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key.f1e25377
	I1007 13:43:38.213153  802960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key
	I1007 13:43:38.213325  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:43:38.213365  802960 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:43:38.213390  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:43:38.213425  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:43:38.213471  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:43:38.213501  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:43:38.213559  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:38.214588  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:43:38.266516  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:43:38.305985  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:43:38.353490  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:43:38.380638  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 13:43:38.424440  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:43:38.452428  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:43:38.480709  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:43:38.509639  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:43:38.536940  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:43:38.564021  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:43:38.591067  802960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:43:38.609218  802960 ssh_runner.go:195] Run: openssl version
	I1007 13:43:38.616235  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:43:38.629007  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634324  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634400  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.641330  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:43:38.654384  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:43:38.667134  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672330  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672407  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.678719  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:43:38.690565  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:43:38.705158  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710787  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710868  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.717093  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:43:38.729957  802960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:43:38.735559  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:43:38.742580  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:43:38.749684  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:43:38.756534  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:43:38.762897  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:43:38.770450  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 13:43:38.777701  802960 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:43:38.777813  802960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:43:38.777880  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.822678  802960 cri.go:89] found id: ""
	I1007 13:43:38.822746  802960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:43:38.833436  802960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:43:38.833463  802960 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:43:38.833516  802960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:43:38.844226  802960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:43:38.845383  802960 kubeconfig.go:125] found "default-k8s-diff-port-489319" server: "https://192.168.61.101:8444"
	I1007 13:43:38.848063  802960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:43:38.859087  802960 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.101
	I1007 13:43:38.859129  802960 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:43:38.859142  802960 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:43:38.859221  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.902955  802960 cri.go:89] found id: ""
	I1007 13:43:38.903054  802960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:43:38.920556  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:43:38.930998  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:43:38.931027  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:43:38.931095  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:43:38.940538  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:43:38.940608  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:43:38.951198  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:43:38.960653  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:43:38.960746  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:43:38.970800  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.981094  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:43:38.981176  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.991845  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:43:39.001966  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:43:39.002080  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:43:39.014014  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:43:39.026304  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:39.157169  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.098491  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.941274215s)
	I1007 13:43:41.098539  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.310925  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.402330  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.502763  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:43:41.502864  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.003197  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 
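The repeated [kubelet-check] failures in the block above are plain HTTP probes of the kubelet's local healthz endpoint on port 10248. A minimal Go sketch of an equivalent probe (the endpoint and port come from the log; everything else here is illustrative, not minikube's or kubeadm's actual code):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        // Same endpoint kubeadm's kubelet-check polls while waiting for the control plane.
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // "connection refused" here corresponds to the failures logged above: the kubelet is not listening.
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }
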
	I1007 13:43:42.503307  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.523040  802960 api_server.go:72] duration metric: took 1.020293575s to wait for apiserver process to appear ...
	I1007 13:43:42.523069  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:43:42.523093  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:42.523750  802960 api_server.go:269] stopped: https://192.168.61.101:8444/healthz: Get "https://192.168.61.101:8444/healthz": dial tcp 192.168.61.101:8444: connect: connection refused
	I1007 13:43:43.023271  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.500619  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.500651  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.500665  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.544628  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.544688  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.544701  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.643845  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:45.643890  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.023194  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.029635  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.029672  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.523339  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.528709  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.528745  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.023901  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.032151  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:47.032192  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.523593  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.531558  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:43:47.542161  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:43:47.542203  802960 api_server.go:131] duration metric: took 5.019126566s to wait for apiserver health ...
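The apiserver wait above repeatedly GETs /healthz and keeps polling while the endpoint returns 403 (RBAC roles not bootstrapped yet) or 500 (poststarthooks still failing), stopping once it returns 200. A rough Go sketch of that polling pattern, assuming the URL from the log; the TLS verification skip and the intervals are assumptions for the sketch, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe client: the apiserver serves a self-signed certificate here, so verification is skipped.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.101:8444/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403 and 500 both mean "not ready yet": keep polling.
                fmt.Println("not ready yet, status", code)
            } else {
                fmt.Println("not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
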
	I1007 13:43:47.542216  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:47.542227  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:47.544352  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:43:47.546075  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:43:47.560213  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:43:47.612380  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:43:47.633953  802960 system_pods.go:59] 8 kube-system pods found
	I1007 13:43:47.634015  802960 system_pods.go:61] "coredns-7c65d6cfc9-4nl8s" [798ab07d-53ab-45f3-9517-a3ea78152fc7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:43:47.634042  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [a3fd82bc-a9b5-4955-b3f8-d88c5bb5951d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:43:47.634058  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [431b750f-f9ca-4e27-a7db-6c758047acf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:43:47.634069  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [0289a6a2-f3b7-43fa-a97c-4464b93c2ecc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:43:47.634081  802960 system_pods.go:61] "kube-proxy-9s9p4" [8aeaf16d-764e-4da5-b27d-1915e33b3f2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 13:43:47.634102  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [4e5878d2-8ceb-4707-b2fd-834fd5f485be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:43:47.634114  802960 system_pods.go:61] "metrics-server-6867b74b74-s8v5f" [c498a0f1-ffb8-482d-b6be-ce04d3d6ff85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:43:47.634120  802960 system_pods.go:61] "storage-provisioner" [c7754b45-21b7-4a4e-b21a-11c5e9eae07d] Running
	I1007 13:43:47.634133  802960 system_pods.go:74] duration metric: took 21.726405ms to wait for pod list to return data ...
	I1007 13:43:47.634143  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:43:47.646482  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:43:47.646520  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:43:47.646534  802960 node_conditions.go:105] duration metric: took 12.386071ms to run NodePressure ...
	I1007 13:43:47.646556  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:48.002169  802960 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007151  802960 kubeadm.go:739] kubelet initialised
	I1007 13:43:48.007183  802960 kubeadm.go:740] duration metric: took 4.972433ms waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007211  802960 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:43:48.013961  802960 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:50.020725  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:52.020875  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:53.521602  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.521625  802960 pod_ready.go:82] duration metric: took 5.507628288s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.521637  802960 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529062  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.529090  802960 pod_ready.go:82] duration metric: took 7.446479ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529101  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:55.536129  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:58.036214  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:00.535183  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:02.035543  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.035567  802960 pod_ready.go:82] duration metric: took 8.506460378s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.035578  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040799  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.040823  802960 pod_ready.go:82] duration metric: took 5.237515ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040833  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045410  802960 pod_ready.go:93] pod "kube-proxy-9s9p4" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.045434  802960 pod_ready.go:82] duration metric: took 4.593822ms for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045444  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049665  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.049691  802960 pod_ready.go:82] duration metric: took 4.239058ms for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049701  802960 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:04.056407  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:06.062186  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:08.555372  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:10.556334  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:12.556423  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:14.557939  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:17.055829  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:19.056756  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:21.057049  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:23.058462  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:25.556545  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:27.556661  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:30.057123  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:32.057581  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:34.556797  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:37.055971  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:39.057054  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:41.057194  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:43.555532  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:45.556365  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:47.556508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:50.056070  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:52.056349  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:54.057809  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:56.556012  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:58.556338  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:00.558599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:03.058077  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:05.558375  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:07.558780  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:10.055494  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:12.057085  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:14.557752  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:17.056626  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:19.556724  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:22.057696  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:24.556552  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:27.056861  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:29.057505  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:31.555965  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:33.557729  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:35.557839  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:38.056814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:40.057838  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:42.058324  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:44.557202  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:47.056736  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:49.057871  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:51.556705  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:53.557023  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:55.557080  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:57.557599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:00.057399  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:02.057880  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:04.556689  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:06.557381  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:09.057237  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:11.057328  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:13.556210  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:15.556303  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:17.556994  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:19.557835  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:22.056480  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:24.556325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:26.556600  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:28.556639  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:30.556983  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:33.056142  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:35.057034  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:37.057246  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:39.556678  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:42.056900  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:44.057207  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:46.057325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:48.556417  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:51.056726  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:53.556598  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:55.557245  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:58.058116  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:00.059008  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:02.557074  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:05.056911  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:07.057374  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:09.556185  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:11.556584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:14.056433  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:16.056567  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:18.557584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:21.056484  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:23.056610  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.058105  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:27.555814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.556605  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:31.557226  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:34.057006  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.556126  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:38.556720  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:40.557339  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.055498  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:45.056400  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:47.056671  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:49.556490  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:52.056617  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:54.556079  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.556885  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:59.056725  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:01.560508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.050835  802960 pod_ready.go:82] duration metric: took 4m0.001111748s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	E1007 13:48:02.050883  802960 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:48:02.050910  802960 pod_ready.go:39] duration metric: took 4m14.0436862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:02.050947  802960 kubeadm.go:597] duration metric: took 4m23.217477497s to restartPrimaryControlPlane
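The four-minute metrics-server wait that times out above is a poll on the pod's Ready condition. A hedged client-go sketch of the same kind of wait (the pod name and namespace are taken from the log; the kubeconfig path and polling interval are assumptions, and this is not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s, give up after 4m0s - the same budget the log shows for system-critical pods.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-s8v5f", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet" and keep waiting
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Println("pod never became Ready:", err)
            return
        }
        fmt.Println("pod is Ready")
    }
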
	W1007 13:48:02.051112  802960 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:48:02.051179  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:48:28.304486  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.253272533s)
	I1007 13:48:28.304707  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:28.320794  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:48:28.332332  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:48:28.343070  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:48:28.343095  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:48:28.343157  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:48:28.354012  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:48:28.354118  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:48:28.364581  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:48:28.375492  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:48:28.375560  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:48:28.386761  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.396663  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:48:28.396728  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.407316  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:48:28.417872  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:48:28.417938  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:48:28.428569  802960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:48:28.476704  802960 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:48:28.476823  802960 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:48:28.590009  802960 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:48:28.590162  802960 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:48:28.590300  802960 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:48:28.600046  802960 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:48:28.602443  802960 out.go:235]   - Generating certificates and keys ...
	I1007 13:48:28.602559  802960 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:48:28.602623  802960 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:48:28.602711  802960 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:48:28.602790  802960 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:48:28.602884  802960 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:48:28.602931  802960 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:48:28.603008  802960 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:48:28.603118  802960 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:48:28.603256  802960 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:48:28.603372  802960 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:48:28.603429  802960 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:48:28.603498  802960 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:48:28.710739  802960 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:48:28.967010  802960 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:48:29.107742  802960 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:48:29.239779  802960 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:48:29.344572  802960 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:48:29.345301  802960 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:48:29.348025  802960 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:48:29.350415  802960 out.go:235]   - Booting up control plane ...
	I1007 13:48:29.350549  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:48:29.350650  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:48:29.350732  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:48:29.369742  802960 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:48:29.379251  802960 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:48:29.379337  802960 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:48:29.527857  802960 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:48:29.528013  802960 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:48:30.528609  802960 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001343456s
	I1007 13:48:30.528741  802960 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:48:35.532432  802960 kubeadm.go:310] [api-check] The API server is healthy after 5.003996251s
	I1007 13:48:35.548242  802960 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:48:35.569290  802960 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:48:35.607149  802960 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:48:35.607386  802960 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-489319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:48:35.623965  802960 kubeadm.go:310] [bootstrap-token] Using token: 5jqtrt.7avot15frjqa3f3n
	I1007 13:48:35.626327  802960 out.go:235]   - Configuring RBAC rules ...
	I1007 13:48:35.626469  802960 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:48:35.632447  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:48:35.644119  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:48:35.653482  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:48:35.659903  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:48:35.666151  802960 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:48:35.941468  802960 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:48:36.395332  802960 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:48:36.941654  802960 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:48:36.942749  802960 kubeadm.go:310] 
	I1007 13:48:36.942851  802960 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:48:36.942863  802960 kubeadm.go:310] 
	I1007 13:48:36.942955  802960 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:48:36.942966  802960 kubeadm.go:310] 
	I1007 13:48:36.942997  802960 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:48:36.943073  802960 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:48:36.943160  802960 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:48:36.943180  802960 kubeadm.go:310] 
	I1007 13:48:36.943247  802960 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:48:36.943254  802960 kubeadm.go:310] 
	I1007 13:48:36.943300  802960 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:48:36.943310  802960 kubeadm.go:310] 
	I1007 13:48:36.943379  802960 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:48:36.943477  802960 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:48:36.943559  802960 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:48:36.943567  802960 kubeadm.go:310] 
	I1007 13:48:36.943639  802960 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:48:36.943758  802960 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:48:36.943781  802960 kubeadm.go:310] 
	I1007 13:48:36.944023  802960 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944184  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:48:36.944212  802960 kubeadm.go:310] 	--control-plane 
	I1007 13:48:36.944225  802960 kubeadm.go:310] 
	I1007 13:48:36.944328  802960 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:48:36.944341  802960 kubeadm.go:310] 
	I1007 13:48:36.944441  802960 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944564  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:48:36.946569  802960 kubeadm.go:310] W1007 13:48:28.442953    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.946947  802960 kubeadm.go:310] W1007 13:48:28.444068    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.947056  802960 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:48:36.947089  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:48:36.947100  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:48:36.949279  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:48:36.951020  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:48:36.966261  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
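For reference, the bridge CNI config written in the step above lands at /etc/cni/net.d/1-k8s.conflist on the node and can be inspected over SSH; a minimal check, assuming the minikube binary is on PATH and using the profile name from this log:

    minikube -p default-k8s-diff-port-489319 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist   # print the generated bridge conflist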
	I1007 13:48:36.991447  802960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:48:36.991537  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:36.991576  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-489319 minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=default-k8s-diff-port-489319 minikube.k8s.io/primary=true
	I1007 13:48:37.245837  802960 ops.go:34] apiserver oom_adj: -16
	I1007 13:48:37.253690  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:37.754572  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.254294  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.754766  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.253915  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.754118  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.254526  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.753887  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.254082  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.441338  802960 kubeadm.go:1113] duration metric: took 4.449876263s to wait for elevateKubeSystemPrivileges
	I1007 13:48:41.441397  802960 kubeadm.go:394] duration metric: took 5m2.66370907s to StartCluster
	I1007 13:48:41.441446  802960 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.441564  802960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:48:41.443987  802960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.444365  802960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:48:41.444449  802960 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:48:41.444606  802960 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444633  802960 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444647  802960 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:48:41.444644  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:48:41.444669  802960 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444689  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444696  802960 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444748  802960 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444763  802960 addons.go:243] addon metrics-server should already be in state true
	I1007 13:48:41.444799  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444711  802960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-489319"
	I1007 13:48:41.445223  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445236  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445242  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445285  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445305  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445290  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.446533  802960 out.go:177] * Verifying Kubernetes components...
	I1007 13:48:41.448204  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:48:41.463351  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1007 13:48:41.463547  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I1007 13:48:41.464007  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464024  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464636  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464651  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464667  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.464674  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.465115  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465118  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465331  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.465770  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.465817  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.466630  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I1007 13:48:41.467414  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.468267  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.468293  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.468696  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.469177  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.469225  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.469939  802960 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.469967  802960 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:48:41.470004  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.470429  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.470491  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.485835  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I1007 13:48:41.485934  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I1007 13:48:41.486390  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1007 13:48:41.486401  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486694  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486850  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.487029  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487048  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487286  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487314  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487375  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.487668  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487692  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487915  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.487940  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488170  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488207  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.488812  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.488866  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.490870  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.491026  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.493370  802960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:48:41.493369  802960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:48:41.495269  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:48:41.495304  802960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:48:41.495335  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.495482  802960 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.495504  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:48:41.495525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.499997  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500173  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500600  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500819  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.501010  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501125  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501279  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501286  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501657  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.501683  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.509460  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1007 13:48:41.510229  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.510898  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.510934  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.511328  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.511540  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.513219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.513712  802960 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.513734  802960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:48:41.513759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.517041  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517439  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.517462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517630  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.517885  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.518121  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.518301  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.674144  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:48:41.742749  802960 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753582  802960 node_ready.go:49] node "default-k8s-diff-port-489319" has status "Ready":"True"
	I1007 13:48:41.753616  802960 node_ready.go:38] duration metric: took 10.764539ms for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753630  802960 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:41.769510  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:41.796357  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.844420  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.871099  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:48:41.871126  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:48:41.978289  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:48:41.978325  802960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:48:42.063366  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.063399  802960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:48:42.204106  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.261831  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.261861  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.262168  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.262192  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.262202  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.262209  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.263023  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.263040  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.285756  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.285786  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.286112  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.286135  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.286145  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044454  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.199980665s)
	I1007 13:48:43.044515  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.044892  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.044910  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.044926  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044934  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044942  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.045192  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.045208  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.045193  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303372  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.099210402s)
	I1007 13:48:43.303432  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303452  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.303783  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.303801  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.303799  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303811  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303821  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.304077  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.304094  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.304107  802960 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-489319"
	I1007 13:48:43.306084  802960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1007 13:48:43.307478  802960 addons.go:510] duration metric: took 1.863046306s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
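Once the metrics-server addon enabled above has a Ready pod, it can be checked from the host; a sketch, assuming kubectl is configured with this profile's context:

    kubectl --context default-k8s-diff-port-489319 -n kube-system get deploy metrics-server   # should show 1/1 available once the pod is Ready
    kubectl --context default-k8s-diff-port-489319 top nodes                                  # only returns data after metrics-server is serving metrics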
	I1007 13:48:43.778309  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:45.778814  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:47.775390  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:47.775417  802960 pod_ready.go:82] duration metric: took 6.005863403s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:47.775431  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789544  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.789573  802960 pod_ready.go:82] duration metric: took 1.01413369s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789587  802960 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796239  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.796267  802960 pod_ready.go:82] duration metric: took 6.671875ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796280  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.806996  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.807030  802960 pod_ready.go:82] duration metric: took 10.740949ms for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.807046  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814301  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.814335  802960 pod_ready.go:82] duration metric: took 7.279716ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814350  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976171  802960 pod_ready.go:93] pod "kube-proxy-jpvx5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.976198  802960 pod_ready.go:82] duration metric: took 161.84042ms for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976209  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175024  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:50.175051  802960 pod_ready.go:82] duration metric: took 1.198834555s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175062  802960 pod_ready.go:39] duration metric: took 8.42141844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:50.175094  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:48:50.175154  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:48:50.190906  802960 api_server.go:72] duration metric: took 8.746497817s to wait for apiserver process to appear ...
	I1007 13:48:50.190937  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:48:50.190969  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:48:50.196727  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
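The healthz probe logged above can be reproduced by hand against the same endpoint; a sketch, assuming the apiserver is reachable from the test host at the IP and port shown in the log (the /healthz path is typically readable by anonymous requests under default RBAC):

    curl -k https://192.168.61.101:8444/healthz   # expect HTTP 200 with body "ok"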
	I1007 13:48:50.197751  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:48:50.197774  802960 api_server.go:131] duration metric: took 6.829939ms to wait for apiserver health ...
	I1007 13:48:50.197783  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:48:50.378985  802960 system_pods.go:59] 9 kube-system pods found
	I1007 13:48:50.379015  802960 system_pods.go:61] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.379023  802960 system_pods.go:61] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.379029  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.379034  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.379041  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.379045  802960 system_pods.go:61] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.379050  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.379059  802960 system_pods.go:61] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.379066  802960 system_pods.go:61] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.379078  802960 system_pods.go:74] duration metric: took 181.288145ms to wait for pod list to return data ...
	I1007 13:48:50.379091  802960 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:48:50.574098  802960 default_sa.go:45] found service account: "default"
	I1007 13:48:50.574127  802960 default_sa.go:55] duration metric: took 195.025343ms for default service account to be created ...
	I1007 13:48:50.574137  802960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:48:50.777201  802960 system_pods.go:86] 9 kube-system pods found
	I1007 13:48:50.777233  802960 system_pods.go:89] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.777238  802960 system_pods.go:89] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.777243  802960 system_pods.go:89] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.777247  802960 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.777252  802960 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.777257  802960 system_pods.go:89] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.777260  802960 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.777269  802960 system_pods.go:89] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.777273  802960 system_pods.go:89] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.777283  802960 system_pods.go:126] duration metric: took 203.138905ms to wait for k8s-apps to be running ...
	I1007 13:48:50.777292  802960 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:48:50.777338  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:50.794312  802960 system_svc.go:56] duration metric: took 17.00771ms WaitForService to wait for kubelet
	I1007 13:48:50.794350  802960 kubeadm.go:582] duration metric: took 9.349947078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:48:50.794376  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:48:50.974457  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:48:50.974484  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:48:50.974507  802960 node_conditions.go:105] duration metric: took 180.125373ms to run NodePressure ...
	I1007 13:48:50.974520  802960 start.go:241] waiting for startup goroutines ...
	I1007 13:48:50.974526  802960 start.go:246] waiting for cluster config update ...
	I1007 13:48:50.974537  802960 start.go:255] writing updated cluster config ...
	I1007 13:48:50.974827  802960 ssh_runner.go:195] Run: rm -f paused
	I1007 13:48:51.030094  802960 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:48:51.032736  802960 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-489319" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.069809095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309026069783114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a46a0990-7242-4b2e-9c46-584ffcb0490b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.070270017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9a21262-28f9-4d81-bd54-33eb6fceeb62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.070325794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9a21262-28f9-4d81-bd54-33eb6fceeb62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.070508943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9a21262-28f9-4d81-bd54-33eb6fceeb62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.109784346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=733a31fe-9b0d-456b-a2b1-5b468daf9dba name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.109862520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=733a31fe-9b0d-456b-a2b1-5b468daf9dba name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.111504775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b31689b-ac0e-4493-96ae-b9c2685e5778 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.111862384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309026111840137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b31689b-ac0e-4493-96ae-b9c2685e5778 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.113324489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af2988b5-6222-40ec-8b3c-a8f2f6d8978b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.113383134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af2988b5-6222-40ec-8b3c-a8f2f6d8978b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.113587732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af2988b5-6222-40ec-8b3c-a8f2f6d8978b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.151027075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f99c756c-db85-4d21-bdaa-c2bffdbdc81e name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.151100545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f99c756c-db85-4d21-bdaa-c2bffdbdc81e name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.152614029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21c12d0e-99f0-4d4a-ae09-0a5bec5ff374 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.153492419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309026153467286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c12d0e-99f0-4d4a-ae09-0a5bec5ff374 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.153967509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=998158bf-abc2-4959-aa69-260a64184fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.154019870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=998158bf-abc2-4959-aa69-260a64184fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.154508145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=998158bf-abc2-4959-aa69-260a64184fa4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.189991641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ee69617-f754-48cf-a983-c7d0976c0d25 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.190176877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ee69617-f754-48cf-a983-c7d0976c0d25 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.191095581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53523c24-a45b-419c-8c5a-bd212259b3d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.191552491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309026191519965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53523c24-a45b-419c-8c5a-bd212259b3d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.191988971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b47f6942-e308-417d-a1e1-79eba4cee6d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.192068323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b47f6942-e308-417d-a1e1-79eba4cee6d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:50:26 no-preload-016701 crio[711]: time="2024-10-07 13:50:26.192393867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b47f6942-e308-417d-a1e1-79eba4cee6d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94edaab72692f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   613ca80cd1813       storage-provisioner
	77f4235b3f737       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   260b8ee5c8454       coredns-7c65d6cfc9-qq4hc
	3b49e546c6c3a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   551853bf60c13       kube-proxy-bjqg2
	d0155669cedd9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   2353a0e2ee0b1       coredns-7c65d6cfc9-pdnlq
	caf6629f0f9a5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   c22c6a87ee1d8       kube-scheduler-no-preload-016701
	2b732f5571fae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   2a30d99100842       kube-controller-manager-no-preload-016701
	2fc99bea0fa86       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   fa87f639f0782       etcd-no-preload-016701
	abfd5843e8f3f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   bc94d469c673e       kube-apiserver-no-preload-016701
	c94ba6e728b7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   3891b39502551       kube-apiserver-no-preload-016701
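
	The container list above can be reproduced on the node itself; a minimal check, assuming the no-preload-016701 profile is still running and crictl is available in the guest:

	    # full container list, including exited containers (matches the table above)
	    minikube ssh -p no-preload-016701 -- sudo crictl ps -a
	    # narrow to a single component, e.g. the restarted kube-apiserver
	    minikube ssh -p no-preload-016701 -- sudo crictl ps -a --name kube-apiserver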
	
	
	==> coredns [77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-016701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-016701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=no-preload-016701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:41:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-016701
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:46:30 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:46:30 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:46:30 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:46:30 +0000   Mon, 07 Oct 2024 13:41:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    no-preload-016701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2608db7ca5142dda5055018b77ff816
	  System UUID:                a2608db7-ca51-42dd-a505-5018b77ff816
	  Boot ID:                    a7bb47b5-1411-4ce0-b484-4aa4ef503a72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-pdnlq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-7c65d6cfc9-qq4hc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-016701                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-no-preload-016701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-no-preload-016701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-bjqg2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-016701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-s7qkh              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m19s (x8 over 9m19s)  kubelet          Node no-preload-016701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x8 over 9m19s)  kubelet          Node no-preload-016701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x7 over 9m19s)  kubelet          Node no-preload-016701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s                  kubelet          Node no-preload-016701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s                  kubelet          Node no-preload-016701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s                  kubelet          Node no-preload-016701 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-016701 event: Registered Node no-preload-016701 in Controller
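
	The node summary above is the standard kubectl node description; assuming the kubectl context created for this profile, the same view can be pulled directly with:

	    kubectl --context no-preload-016701 describe node no-preload-016701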
	
	
	==> dmesg <==
	[  +0.066356] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.480495] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.873885] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.623457] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.662699] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.067510] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077968] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.205696] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.162708] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.374861] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Oct 7 13:36] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.068683] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.593849] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +4.590762] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.039901] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 7 13:41] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +0.060474] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.994875] systemd-fstab-generator[3326]: Ignoring "noauto" option for root device
	[  +0.077415] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.862627] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.631708] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.619775] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc] <==
	{"level":"info","ts":"2024-10-07T13:41:08.361871Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.197:2380"}
	{"level":"info","ts":"2024-10-07T13:41:08.361905Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.197:2380"}
	{"level":"info","ts":"2024-10-07T13:41:08.361815Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T13:41:08.364580Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6a8c9de3121f6040","initial-advertise-peer-urls":["https://192.168.39.197:2380"],"listen-peer-urls":["https://192.168.39.197:2380"],"advertise-client-urls":["https://192.168.39.197:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.197:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T13:41:08.364640Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T13:41:09.125248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 received MsgPreVoteResp from 6a8c9de3121f6040 at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 received MsgVoteResp from 6a8c9de3121f6040 at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a8c9de3121f6040 elected leader 6a8c9de3121f6040 at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.129314Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.133439Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6a8c9de3121f6040","local-member-attributes":"{Name:no-preload-016701 ClientURLs:[https://192.168.39.197:2379]}","request-path":"/0/members/6a8c9de3121f6040/attributes","cluster-id":"7da2d91c76c1be47","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T13:41:09.133497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:41:09.133933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:41:09.134658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:41:09.141527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T13:41:09.149231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T13:41:09.149311Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T13:41:09.149453Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:41:09.152338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.197:2379"}
	{"level":"info","ts":"2024-10-07T13:41:09.152482Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7da2d91c76c1be47","local-member-id":"6a8c9de3121f6040","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.152568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.152617Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:50:26 up 14 min,  0 users,  load average: 0.24, 0.14, 0.10
	Linux no-preload-016701 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350] <==
	W1007 13:46:11.892039       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:46:11.892099       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:46:11.893206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:46:11.893273       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:47:11.894170       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:47:11.894265       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:47:11.894305       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:47:11.894319       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1007 13:47:11.895473       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:47:11.895537       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:49:11.895679       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:49:11.895845       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:49:11.896213       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:49:11.896309       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1007 13:49:11.897556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:49:11.897712       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
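
	The repeated 503 responses above mean the aggregated v1beta1.metrics.k8s.io API was never being served by its backend. A minimal way to see where it is stuck, assuming the profile's kubectl context and the k8s-app=metrics-server label used by the minikube addon:

	    # is the aggregated API registered and Available?
	    kubectl --context no-preload-016701 get apiservices v1beta1.metrics.k8s.io
	    # is the backing pod running, and what is it logging?
	    kubectl --context no-preload-016701 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context no-preload-016701 -n kube-system logs deploy/metrics-server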
	
	
	==> kube-apiserver [c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0] <==
	W1007 13:40:59.574347       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.695446       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.697968       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.777385       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.815761       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.952546       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.959361       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:00.001206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:00.096443       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:03.533552       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:03.591429       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.007735       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.021500       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.056038       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.179775       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.232797       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.240431       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.260603       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.402427       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.428167       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.500935       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.505520       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.556775       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.602630       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.659326       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae] <==
	E1007 13:45:17.881797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:45:18.335025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:45:47.888940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:45:48.344339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:46:17.897930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:46:18.353082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:46:30.341211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-016701"
	E1007 13:46:47.903965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:46:48.363084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:47:10.837637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="225.171µs"
	E1007 13:47:17.910212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:47:18.371585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:47:25.846762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="261.858µs"
	E1007 13:47:47.918338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:47:48.380572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:48:17.925830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:48:18.391917       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:48:47.932877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:48:48.401445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:49:17.939794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:49:18.409888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:49:47.947590       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:49:48.418475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:50:17.954603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:50:18.426842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
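Every resource_quota_controller / garbage-collector pair above is the same underlying problem: the aggregated metrics.k8s.io/v1beta1 API never becomes discoverable because metrics-server is stuck in ImagePullBackOff (see the kubelet log below). A hedged pair of checks that makes the link explicit (assumes the usual APIService name and the k8s-app=metrics-server label used by minikube's addon manifest):

    kubectl --context no-preload-016701 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-016701 -n kube-system get pods -l k8s-app=metrics-server

For as long as the pod cannot pull its image, the APIService should report Available=False, which is exactly the "stale GroupVersion discovery" the controller-manager keeps logging.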
	
	
	==> kube-proxy [3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:41:20.397358       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:41:20.413806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.197"]
	E1007 13:41:20.415592       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:41:20.517857       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:41:20.517896       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:41:20.517929       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:41:20.547522       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:41:20.547707       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:41:20.547716       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:41:20.551868       1 config.go:199] "Starting service config controller"
	I1007 13:41:20.551967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:41:20.552300       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:41:20.552401       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:41:20.555504       1 config.go:328] "Starting node config controller"
	I1007 13:41:20.555569       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:41:20.652826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:41:20.652883       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:41:20.655645       1 shared_informer.go:320] Caches are synced for node config
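Two things in the kube-proxy log are worth separating: the nftables cleanup errors at startup appear harmless here (kube-proxy runs in iptables mode and only tries to remove leftover nftables rules on a kernel without that support), while the 13:41:20.415592 warning is the configuration hint the message itself spells out, i.e. nodePortAddresses is unset and `--nodeport-addresses primary` would narrow NodePort listeners. To see what the cluster is actually running with, the kubeadm-managed ConfigMap can be inspected; a sketch assuming minikube's default kubeadm layout, where kube-proxy config lives in the kube-proxy ConfigMap:

    kubectl --context no-preload-016701 -n kube-system get configmap kube-proxy -o yaml | grep -n -A1 nodePortAddresses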
	
	
	==> kube-scheduler [caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d] <==
	W1007 13:41:11.738542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:11.738557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.803496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:11.803727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.857746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:41:11.857803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.867872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:41:11.867928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.873200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:41:11.873353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.967351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:41:11.967404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.087217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:12.087840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.144413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:12.145236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.145519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:41:12.145607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.149362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:41:12.149440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.254884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:41:12.255162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.453028       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:41:12.453079       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 13:41:14.315519       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
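The burst of "forbidden" list/watch errors is the scheduler starting before its RBAC bindings are visible; the final line (caches synced at 13:41:14) shows it recovered on its own. If there were any doubt, impersonating the scheduler user is a quick spot-check (a sketch; requires impersonation rights, which minikube's default admin context has):

    kubectl --context no-preload-016701 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
    kubectl --context no-preload-016701 auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler

Both should print "yes" once bootstrapping has finished.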
	
	
	==> kubelet <==
	Oct 07 13:49:13 no-preload-016701 kubelet[3333]: E1007 13:49:13.983680    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308953983067583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:23 no-preload-016701 kubelet[3333]: E1007 13:49:23.985992    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308963985550359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:23 no-preload-016701 kubelet[3333]: E1007 13:49:23.986528    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308963985550359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:25 no-preload-016701 kubelet[3333]: E1007 13:49:25.820265    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:49:33 no-preload-016701 kubelet[3333]: E1007 13:49:33.990204    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308973989510344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:33 no-preload-016701 kubelet[3333]: E1007 13:49:33.990602    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308973989510344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:36 no-preload-016701 kubelet[3333]: E1007 13:49:36.828531    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:49:43 no-preload-016701 kubelet[3333]: E1007 13:49:43.992639    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308983992216352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:43 no-preload-016701 kubelet[3333]: E1007 13:49:43.992681    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308983992216352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:50 no-preload-016701 kubelet[3333]: E1007 13:49:50.819576    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:49:53 no-preload-016701 kubelet[3333]: E1007 13:49:53.995101    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308993994558413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:49:53 no-preload-016701 kubelet[3333]: E1007 13:49:53.995208    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308993994558413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:03 no-preload-016701 kubelet[3333]: E1007 13:50:03.821503    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:50:03 no-preload-016701 kubelet[3333]: E1007 13:50:03.997655    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309003996826691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:03 no-preload-016701 kubelet[3333]: E1007 13:50:03.997758    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309003996826691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:13 no-preload-016701 kubelet[3333]: E1007 13:50:13.837223    3333 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 13:50:13 no-preload-016701 kubelet[3333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 13:50:13 no-preload-016701 kubelet[3333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 13:50:13 no-preload-016701 kubelet[3333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 13:50:13 no-preload-016701 kubelet[3333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:50:14 no-preload-016701 kubelet[3333]: E1007 13:50:14.000218    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309013999556050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:14 no-preload-016701 kubelet[3333]: E1007 13:50:14.000309    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309013999556050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:14 no-preload-016701 kubelet[3333]: E1007 13:50:14.820294    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:50:24 no-preload-016701 kubelet[3333]: E1007 13:50:24.002355    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309024001872733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:50:24 no-preload-016701 kubelet[3333]: E1007 13:50:24.002765    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309024001872733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
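The kubelet log pins down the metrics-server failure: the container image is fake.domain/registry.k8s.io/echoserver:1.4, a registry that does not resolve, so the pull backs off indefinitely. The eviction-manager "missing image stats" lines look like a separate, recurring CRI-O image-stats complaint and can be read past. The pull failures are easiest to see in the pod's events (label assumed to be k8s-app=metrics-server, as above):

    kubectl --context no-preload-016701 -n kube-system describe pod -l k8s-app=metrics-server | grep -A10 '^Events'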
	
	
	==> storage-provisioner [94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670] <==
	I1007 13:41:20.450738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:41:20.467512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:41:20.468708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:41:20.493219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:41:20.493408       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8!
	I1007 13:41:20.497597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c02c4af-e407-4666-a147-f0763dc9f6d3", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8 became leader
	I1007 13:41:20.594321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-016701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-s7qkh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh: exit status 1 (68.551197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-s7qkh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh: exit status 1
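The post-mortem itself has a small blind spot: the pod list is cluster-wide, but the follow-up describe omits -n kube-system, so kubectl most likely looked for metrics-server-6867b74b74-s7qkh in the default namespace and reported NotFound even though the pod still existed. Selecting by namespace and label avoids that; a manual equivalent of the helper's two calls (label assumed as above):

    kubectl --context no-preload-016701 get po -A --field-selector=status.phase!=Running
    kubectl --context no-preload-016701 -n kube-system describe po -l k8s-app=metrics-server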
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
E1007 13:44:53.449027  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
E1007 13:45:13.698296  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
E1007 13:49:53.448995  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:50:13.698987  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (243.695703ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-120978" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
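The wait above polls the apiserver on 192.168.83.103:8443 for pods matching the k8s-app=kubernetes-dashboard label until the 9m0s deadline; since the apiserver never came back after the stop/start cycle, every poll was refused. As an illustrative equivalent only (not the test's own code), the same readiness check could be reproduced from a shell, assuming the kubeconfig context old-k8s-version-120978 exists for this profile:

	kubectl --context old-k8s-version-120978 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

	# or simply list what the helper was querying on each poll:
	kubectl --context old-k8s-version-120978 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard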
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (244.824652ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
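The status checks here pass a Go template to minikube's --format flag; .Host and .APIServer are separate fields of the status output, which is why the host can report Running while the apiserver reports Stopped. A minimal example (same profile name assumed) that prints both fields in one call:

	out/minikube-linux-amd64 status -p old-k8s-version-120978 --format='{{.Host}}/{{.APIServer}}'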
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-120978 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:38:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:38:32.108474  802960 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:38:32.108648  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108659  802960 out.go:358] Setting ErrFile to fd 2...
	I1007 13:38:32.108665  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108864  802960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:38:32.109477  802960 out.go:352] Setting JSON to false
	I1007 13:38:32.110672  802960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12061,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:38:32.110773  802960 start.go:139] virtualization: kvm guest
	I1007 13:38:32.113566  802960 out.go:177] * [default-k8s-diff-port-489319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:38:32.115580  802960 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:38:32.115627  802960 notify.go:220] Checking for updates...
	I1007 13:38:32.118464  802960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:38:32.120173  802960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:38:32.121799  802960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:38:32.123382  802960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:38:32.125020  802960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:38:29.209336  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:31.212514  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:32.126861  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:38:32.127255  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.127337  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.143671  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1007 13:38:32.144158  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.144820  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.144844  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.145206  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.145416  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.145655  802960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:38:32.146010  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.146112  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.161508  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1007 13:38:32.162082  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.162517  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.162541  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.162886  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.163112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.200281  802960 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:38:32.201380  802960 start.go:297] selected driver: kvm2
	I1007 13:38:32.201393  802960 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.201499  802960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:38:32.202260  802960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.202353  802960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:38:32.218742  802960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:38:32.219129  802960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:38:32.219168  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:38:32.219221  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:38:32.219254  802960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.219380  802960 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.222273  802960 out.go:177] * Starting "default-k8s-diff-port-489319" primary control-plane node in "default-k8s-diff-port-489319" cluster
	I1007 13:38:32.223750  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:38:32.223801  802960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:38:32.223816  802960 cache.go:56] Caching tarball of preloaded images
	I1007 13:38:32.223891  802960 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:38:32.223901  802960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:38:32.223997  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:38:32.224208  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:38:32.224280  802960 start.go:364] duration metric: took 38.73µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:38:32.224297  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:38:32.224303  802960 fix.go:54] fixHost starting: 
	I1007 13:38:32.224637  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.224674  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.239368  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1007 13:38:32.239869  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.240386  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.240409  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.240813  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.241063  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.241228  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:38:32.243196  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Running err=<nil>
	W1007 13:38:32.243217  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:38:32.245881  802960 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-489319" VM ...
	I1007 13:38:30.514797  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:33.014487  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:32.247486  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:38:32.247524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.247949  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:38:32.250961  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:38:32.251539  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251823  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:38:32.252088  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252375  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:38:32.252944  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:38:32.253182  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:38:32.253197  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:38:35.122367  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:33.709093  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.709691  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.514611  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:38.014557  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:38.198306  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:37.710573  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.210415  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.516522  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.013328  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:44.274468  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:42.709396  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:44.709744  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.711052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:45.015094  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:47.513659  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:49.515994  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:47.346399  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:49.208303  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:51.209295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.013939  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:54.514863  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:53.426353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:56.498332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:53.709052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.710245  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:57.014521  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:59.020215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.208916  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:00.210007  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.514807  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:04.015032  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:05.618355  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:02.709614  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:05.211295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:06.514215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.515091  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.690293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:07.709699  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:10.208983  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.014066  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:13.014936  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:14.774418  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:12.209697  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:14.710276  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:15.514317  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.514843  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:39:17.842234  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:17.210309  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:19.210449  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:21.709672  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:20.014047  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:22.014646  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:24.015649  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:23.926328  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:26.994313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:24.203346  800212 pod_ready.go:82] duration metric: took 4m0.00074454s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" ...
	E1007 13:39:24.203420  800212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:39:24.203447  800212 pod_ready.go:39] duration metric: took 4m15.010484686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:39:24.203483  800212 kubeadm.go:597] duration metric: took 4m22.534978235s to restartPrimaryControlPlane
	W1007 13:39:24.203568  800212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:24.203597  800212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:26.018248  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:28.513858  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:30.513894  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:33.013867  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:33.074393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:36.146280  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:35.015200  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:37.514925  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:40.015715  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.514458  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.226242  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.298298  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.013741  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:47.014360  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:49.015110  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
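	(The block above shows minikube's log gatherer falling back to raw diagnostics — crictl, dmesg, journalctl — once no control-plane containers are found for the v1.20.0 cluster. As a rough illustration only, not minikube's actual cri.go/logs.go code, the same commands can be driven from Go with os/exec; the runDiag helper below is hypothetical, and minikube really executes these over SSH on the node rather than locally.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runDiag shells out to a diagnostic command and returns its combined output,
// mirroring the kind of commands shown in the log above (crictl, journalctl).
func runDiag(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// List any kube-apiserver containers known to the CRI runtime.
	ids, err := runDiag("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
	if err != nil || ids == "" {
		fmt.Println("no kube-apiserver container found; collecting kubelet logs")
		logs, _ := runDiag("sudo", "journalctl", "-u", "kubelet", "-n", "400")
		fmt.Println(logs)
		return
	}
	fmt.Println("kube-apiserver container IDs:", ids)
}
```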
	I1007 13:39:50.616036  800212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.41240969s)
	I1007 13:39:50.616124  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:50.638334  800212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:50.654214  800212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:50.672345  800212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:50.672370  800212 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:50.672429  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:50.699073  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:50.699139  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:50.711774  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:50.737818  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:50.737885  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:50.749603  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.760893  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:50.760965  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.771572  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:50.781793  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:50.781856  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
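	(The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init re-creates it. A minimal sketch of that decision, using the file paths from the log; the removeIfStale helper is illustrative and is not the kubeadm.go implementation, which runs grep/rm over SSH as logged.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig file unless it already references the
// expected control-plane endpoint, roughly matching the grep/rm pattern above.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		// Missing file: nothing to clean up (the "No such file or directory" case in the log).
		return nil
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup failed:", err)
		}
	}
}
```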
	I1007 13:39:50.793541  800212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:50.851411  800212 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:39:50.851486  800212 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:50.967773  800212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:50.967938  800212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:50.968105  800212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:50.976935  800212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:51.378305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:50.979096  800212 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:50.979227  800212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:50.979291  800212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:50.979375  800212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:50.979467  800212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:50.979560  800212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:50.979634  800212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:50.979717  800212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:50.979789  800212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:50.979857  800212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:50.979925  800212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:50.979959  800212 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:50.980011  800212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:51.280206  800212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:51.430988  800212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:39:51.677074  800212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:51.867985  800212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:52.283613  800212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:52.284108  800212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:52.288874  800212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.514892  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.515960  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:39:54.454391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:52.290945  800212 out.go:235]   - Booting up control plane ...
	I1007 13:39:52.291058  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:52.291164  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:52.291610  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:52.312059  800212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:52.318321  800212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:52.318412  800212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:52.456671  800212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:39:52.456802  800212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:39:52.958340  800212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.579104ms
	I1007 13:39:52.958484  800212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:39:57.959379  800212 kubeadm.go:310] [api-check] The API server is healthy after 5.001260012s
	I1007 13:39:57.980499  800212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:39:57.999006  800212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:39:58.043754  800212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:39:58.044050  800212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-653322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:39:58.062167  800212 kubeadm.go:310] [bootstrap-token] Using token: 72a6vd.dmbcvepur9l2dhmv
	I1007 13:39:58.064163  800212 out.go:235]   - Configuring RBAC rules ...
	I1007 13:39:58.064326  800212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:39:58.079082  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:39:58.094414  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:39:58.099862  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:39:58.109846  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:39:58.122572  800212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:39:58.370342  800212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:39:58.808645  800212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:39:59.367759  800212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:39:59.368708  800212 kubeadm.go:310] 
	I1007 13:39:59.368834  800212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:39:59.368859  800212 kubeadm.go:310] 
	I1007 13:39:59.368976  800212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:39:59.368991  800212 kubeadm.go:310] 
	I1007 13:39:59.369031  800212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:39:59.369111  800212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:39:59.369155  800212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:39:59.369162  800212 kubeadm.go:310] 
	I1007 13:39:59.369217  800212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:39:59.369245  800212 kubeadm.go:310] 
	I1007 13:39:59.369317  800212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:39:59.369329  800212 kubeadm.go:310] 
	I1007 13:39:59.369390  800212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:39:59.369487  800212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:39:59.369588  800212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:39:59.369600  800212 kubeadm.go:310] 
	I1007 13:39:59.369722  800212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:39:59.369826  800212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:39:59.369838  800212 kubeadm.go:310] 
	I1007 13:39:59.369960  800212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370113  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:39:59.370151  800212 kubeadm.go:310] 	--control-plane 
	I1007 13:39:59.370160  800212 kubeadm.go:310] 
	I1007 13:39:59.370302  800212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:39:59.370331  800212 kubeadm.go:310] 
	I1007 13:39:59.370458  800212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370592  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:39:59.371701  800212 kubeadm.go:310] W1007 13:39:50.802353    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372082  800212 kubeadm.go:310] W1007 13:39:50.803265    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372217  800212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:39:59.372252  800212 cni.go:84] Creating CNI manager for ""
	I1007 13:39:59.372266  800212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:39:59.374383  800212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:39:56.015201  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:58.517383  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:00.534326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:59.376063  800212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:39:59.389097  800212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:39:59.409782  800212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:39:59.409864  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:39:59.409859  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-653322 minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=embed-certs-653322 minikube.k8s.io/primary=true
	I1007 13:39:59.451756  800212 ops.go:34] apiserver oom_adj: -16
	I1007 13:39:59.647019  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.147361  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.647505  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.147866  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.647444  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.147271  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.647066  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.147382  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.647825  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.796730  800212 kubeadm.go:1113] duration metric: took 4.386947643s to wait for elevateKubeSystemPrivileges
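	(The repeated "kubectl get sa default" runs above are a poll, roughly every 500ms judging by the timestamps, for the default service account to exist so that the minikube-rbac clusterrolebinding can be applied. A hedged client-go sketch of the same wait, assuming a kubeconfig path; minikube itself shells out to the bundled kubectl as logged rather than using client-go here.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; on the node minikube uses /var/lib/minikube/kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll until the "default" service account exists, like the repeated
	// "kubectl get sa default" calls in the log (a real caller would add a timeout).
	for {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```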
	I1007 13:40:03.796776  800212 kubeadm.go:394] duration metric: took 5m2.178460784s to StartCluster
	I1007 13:40:03.796802  800212 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.796927  800212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:40:03.800809  800212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.801152  800212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:40:03.801235  800212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:40:03.801341  800212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-653322"
	I1007 13:40:03.801366  800212 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-653322"
	W1007 13:40:03.801374  800212 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:40:03.801380  800212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-653322"
	I1007 13:40:03.801397  800212 addons.go:69] Setting metrics-server=true in profile "embed-certs-653322"
	I1007 13:40:03.801418  800212 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:40:03.801428  800212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-653322"
	I1007 13:40:03.801442  800212 addons.go:234] Setting addon metrics-server=true in "embed-certs-653322"
	W1007 13:40:03.801452  800212 addons.go:243] addon metrics-server should already be in state true
	I1007 13:40:03.801485  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801411  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801854  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801895  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801901  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.801908  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801937  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.802059  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.803364  800212 out.go:177] * Verifying Kubernetes components...
	I1007 13:40:03.805464  800212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:40:03.820021  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1007 13:40:03.820297  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1007 13:40:03.820632  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.820812  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.821460  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821482  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.821598  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1007 13:40:03.821627  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821639  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.822131  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822377  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.822388  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822769  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822823  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.822938  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822990  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.823583  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.823609  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.824011  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.824209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.828672  800212 addons.go:234] Setting addon default-storageclass=true in "embed-certs-653322"
	W1007 13:40:03.828697  800212 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:40:03.828731  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.829118  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.829169  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.839251  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1007 13:40:03.839800  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.840506  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.840533  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.840894  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.841130  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.842660  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1007 13:40:03.843181  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.843235  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.843819  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.843841  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.844191  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.844469  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.845247  800212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:40:03.846191  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.846688  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:40:03.846712  800212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:40:03.846737  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.847801  800212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:40:01.015857  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.515782  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.849482  800212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:03.849504  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:40:03.849528  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.851923  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852765  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.852798  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852987  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.853209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.853367  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.853482  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.854532  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1007 13:40:03.854540  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855100  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.855127  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855438  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.855484  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.855836  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.856149  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.856179  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.856258  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.856436  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.856791  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.857523  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.857572  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.873780  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1007 13:40:03.874162  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.874943  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.874958  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.875358  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.875581  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.877658  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.877924  800212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:03.877940  800212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:40:03.877962  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.881764  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882241  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.882272  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882619  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.882839  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.882999  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.883146  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:04.059493  800212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:40:04.092602  800212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135614  800212 node_ready.go:49] node "embed-certs-653322" has status "Ready":"True"
	I1007 13:40:04.135639  800212 node_ready.go:38] duration metric: took 42.999262ms for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135649  800212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:04.168633  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:04.177323  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:04.206431  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:04.358331  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:40:04.358360  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:40:04.453932  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:40:04.453978  800212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:40:04.543045  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:04.543079  800212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:40:04.628016  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:05.373199  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166722968s)
	I1007 13:40:05.373269  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373286  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373188  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195822413s)
	I1007 13:40:05.373374  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373395  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373726  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373746  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373756  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373764  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373772  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.373786  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373798  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373810  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373819  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.374033  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374019  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374056  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.374077  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374104  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374123  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.449400  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.449435  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.449768  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.449785  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034194  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.406118465s)
	I1007 13:40:06.034270  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034292  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034583  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034603  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034613  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034620  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034852  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:06.034920  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034951  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034967  800212 addons.go:475] Verifying addon metrics-server=true in "embed-certs-653322"
	I1007 13:40:06.036901  800212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:40:03.602357  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:06.038108  800212 addons.go:510] duration metric: took 2.236891318s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1007 13:40:06.178973  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:06.015270  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.514554  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.675453  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:10.182593  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.182620  800212 pod_ready.go:82] duration metric: took 6.013956349s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.182630  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189183  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.189216  800212 pod_ready.go:82] duration metric: took 6.578623ms for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189229  800212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195272  800212 pod_ready.go:93] pod "etcd-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.195298  800212 pod_ready.go:82] duration metric: took 6.06024ms for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195308  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203341  800212 pod_ready.go:93] pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.203365  800212 pod_ready.go:82] duration metric: took 8.050464ms for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203375  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209333  800212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.209364  800212 pod_ready.go:82] duration metric: took 5.980877ms for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209377  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573541  800212 pod_ready.go:93] pod "kube-proxy-z9r92" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.573574  800212 pod_ready.go:82] duration metric: took 364.188673ms for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573586  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973294  800212 pod_ready.go:93] pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.973325  800212 pod_ready.go:82] duration metric: took 399.732244ms for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973334  800212 pod_ready.go:39] duration metric: took 6.837673484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:10.973354  800212 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:40:10.973424  800212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:40:10.989629  800212 api_server.go:72] duration metric: took 7.188432004s to wait for apiserver process to appear ...
	I1007 13:40:10.989661  800212 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:40:10.989690  800212 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1007 13:40:10.994679  800212 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1007 13:40:10.995855  800212 api_server.go:141] control plane version: v1.31.1
	I1007 13:40:10.995882  800212 api_server.go:131] duration metric: took 6.212413ms to wait for apiserver health ...
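	(The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint, expected to return 200/"ok". A minimal sketch with the address taken from the log; note it skips TLS verification for brevity, whereas the real check trusts the cluster CA.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Simplification: skip TLS verification; minikube's actual check uses the
	// cluster's CA certificate instead of InsecureSkipVerify.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.50.36:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect 200 ok
}
```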
	I1007 13:40:10.995894  800212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:40:11.176174  800212 system_pods.go:59] 9 kube-system pods found
	I1007 13:40:11.176207  800212 system_pods.go:61] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.176213  800212 system_pods.go:61] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.176217  800212 system_pods.go:61] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.176221  800212 system_pods.go:61] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.176225  800212 system_pods.go:61] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.176228  800212 system_pods.go:61] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.176231  800212 system_pods.go:61] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.176236  800212 system_pods.go:61] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.176240  800212 system_pods.go:61] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.176251  800212 system_pods.go:74] duration metric: took 180.350309ms to wait for pod list to return data ...
	I1007 13:40:11.176258  800212 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:40:11.374362  800212 default_sa.go:45] found service account: "default"
	I1007 13:40:11.374397  800212 default_sa.go:55] duration metric: took 198.130993ms for default service account to be created ...
	I1007 13:40:11.374410  800212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:40:11.577087  800212 system_pods.go:86] 9 kube-system pods found
	I1007 13:40:11.577124  800212 system_pods.go:89] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.577130  800212 system_pods.go:89] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.577134  800212 system_pods.go:89] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.577138  800212 system_pods.go:89] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.577141  800212 system_pods.go:89] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.577145  800212 system_pods.go:89] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.577149  800212 system_pods.go:89] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.577157  800212 system_pods.go:89] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.577161  800212 system_pods.go:89] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.577171  800212 system_pods.go:126] duration metric: took 202.754732ms to wait for k8s-apps to be running ...
	I1007 13:40:11.577179  800212 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:40:11.577228  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:40:11.595122  800212 system_svc.go:56] duration metric: took 17.926197ms WaitForService to wait for kubelet
	I1007 13:40:11.595159  800212 kubeadm.go:582] duration metric: took 7.793966621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:40:11.595189  800212 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:40:11.774788  800212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:40:11.774819  800212 node_conditions.go:123] node cpu capacity is 2
	I1007 13:40:11.774833  800212 node_conditions.go:105] duration metric: took 179.638486ms to run NodePressure ...
	I1007 13:40:11.774845  800212 start.go:241] waiting for startup goroutines ...
	I1007 13:40:11.774852  800212 start.go:246] waiting for cluster config update ...
	I1007 13:40:11.774862  800212 start.go:255] writing updated cluster config ...
	I1007 13:40:11.775199  800212 ssh_runner.go:195] Run: rm -f paused
	I1007 13:40:11.829109  800212 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:40:11.831389  800212 out.go:177] * Done! kubectl is now configured to use "embed-certs-653322" cluster and "default" namespace by default
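At this point the embed-certs-653322 profile is fully up and the kubeconfig written during the run points at it; the kubectl context minikube creates is named after the profile, so a quick sanity check from the test host would look like this sketch:

  kubectl --context embed-certs-653322 -n kube-system get pods -o wide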
	I1007 13:40:09.682305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:11.014595  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:13.514109  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:12.754391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:16.015105  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.513935  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.834414  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.906376  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.015129  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:23.518245  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:26.014981  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:28.513904  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:27.986365  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.058375  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.015269  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.514729  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:36.013424  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.014881  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.507584  800087 pod_ready.go:82] duration metric: took 4m0.000325195s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" ...
	E1007 13:40:38.507633  800087 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:40:38.507657  800087 pod_ready.go:39] duration metric: took 4m14.542185527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:38.507694  800087 kubeadm.go:597] duration metric: took 4m21.215120888s to restartPrimaryControlPlane
	W1007 13:40:38.507784  800087 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:40:38.507824  800087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:37.138368  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:40.210391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:46.290312  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:49.362313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:55.442333  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:58.514279  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
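The interleaved lines from pid 802960 show a different profile's kvm2 driver repeatedly failing to reach its guest over SSH: "no route to host" at 192.168.61.101:22 generally means the VM is down, still booting, or has not picked up its libvirt lease yet. A host-side triage sketch (the libvirt domain name for this profile is not shown in this excerpt, so <profile> is a placeholder):

  sudo virsh list --all              # is the domain actually running?
  sudo virsh domifaddr <profile>     # does it hold the 192.168.61.101 lease?
  ssh -o ConnectTimeout=5 docker@192.168.61.101 true   # same "no route to host" confirms the guest network is unreachable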
	I1007 13:41:04.757708  800087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.249856079s)
	I1007 13:41:04.757796  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:04.787393  800087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:41:04.805311  800087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:04.819815  800087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:04.819839  800087 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:04.819889  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:04.832607  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:04.832673  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:04.847624  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:04.859808  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:04.859890  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:04.886041  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.896677  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:04.896746  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.906688  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:04.915884  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:04.915965  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
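The block above is minikube's stale-kubeconfig sweep after the kubeadm reset: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here all four files were already gone, so each grep exits with status 2 and the rm calls are no-ops). Condensed into one runnable line per file, the check is equivalent to this sketch:

  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf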
	I1007 13:41:04.925767  800087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:04.981704  800087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:41:04.981799  800087 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:05.104530  800087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:05.104648  800087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:05.104750  800087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:41:05.114782  800087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:05.116948  800087 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:05.117074  800087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:05.117168  800087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:05.117275  800087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:05.117358  800087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:05.117447  800087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:05.117522  800087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:05.117620  800087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:05.117733  800087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:05.117851  800087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:05.117961  800087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:05.118055  800087 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:05.118147  800087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:05.216990  800087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:05.548814  800087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:41:05.921322  800087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:06.206950  800087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:06.412087  800087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:06.412698  800087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:06.415768  800087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:04.598286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:06.418055  800087 out.go:235]   - Booting up control plane ...
	I1007 13:41:06.418195  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:06.419324  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:06.420095  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:06.437974  800087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:06.447497  800087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:06.447580  800087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:06.582080  800087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:41:06.582223  800087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:41:07.583021  800087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001204833s
	I1007 13:41:07.583165  800087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:07.666427  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:13.085728  800087 kubeadm.go:310] [api-check] The API server is healthy after 5.502732546s
	I1007 13:41:13.105047  800087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:41:13.122083  800087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:41:13.157464  800087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:41:13.157751  800087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-016701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:41:13.176062  800087 kubeadm.go:310] [bootstrap-token] Using token: ott6bx.mfcul37ilsfpftrr
	I1007 13:41:13.177574  800087 out.go:235]   - Configuring RBAC rules ...
	I1007 13:41:13.177739  800087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:41:13.184629  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:41:13.200989  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:41:13.206521  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:41:13.212338  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:41:13.217063  800087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:41:13.493012  800087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:41:13.926154  800087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:41:14.500818  800087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:41:14.500844  800087 kubeadm.go:310] 
	I1007 13:41:14.500894  800087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:41:14.500899  800087 kubeadm.go:310] 
	I1007 13:41:14.500988  800087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:41:14.501001  800087 kubeadm.go:310] 
	I1007 13:41:14.501041  800087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:41:14.501095  800087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:41:14.501196  800087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:41:14.501223  800087 kubeadm.go:310] 
	I1007 13:41:14.501307  800087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:41:14.501316  800087 kubeadm.go:310] 
	I1007 13:41:14.501379  800087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:41:14.501448  800087 kubeadm.go:310] 
	I1007 13:41:14.501533  800087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:41:14.501629  800087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:41:14.501733  800087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:41:14.501750  800087 kubeadm.go:310] 
	I1007 13:41:14.501854  800087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:41:14.501964  800087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:41:14.501973  800087 kubeadm.go:310] 
	I1007 13:41:14.502109  800087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502269  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:41:14.502311  800087 kubeadm.go:310] 	--control-plane 
	I1007 13:41:14.502322  800087 kubeadm.go:310] 
	I1007 13:41:14.502443  800087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:41:14.502453  800087 kubeadm.go:310] 
	I1007 13:41:14.502600  800087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502755  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:41:14.503943  800087 kubeadm.go:310] W1007 13:41:04.948448    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504331  800087 kubeadm.go:310] W1007 13:41:04.949311    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504448  800087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:14.504466  800087 cni.go:84] Creating CNI manager for ""
	I1007 13:41:14.504474  800087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:41:14.506680  800087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:41:14.508369  800087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:41:14.520414  800087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
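Because the kvm2 driver is paired with the crio runtime, minikube falls back to its bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist (the payload itself is not reproduced in the log; it is essentially the stock CNI bridge configuration). To inspect it on the guest, one can reuse the machine key and address that appear later in this log, as in this sketch:

  ssh -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa \
      docker@192.168.39.197 sudo cat /etc/cni/net.d/1-k8s.conflist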
	I1007 13:41:14.544669  800087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:41:14.544833  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:14.544851  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016701 minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=no-preload-016701 minikube.k8s.io/primary=true
	I1007 13:41:14.772594  800087 ops.go:34] apiserver oom_adj: -16
	I1007 13:41:14.772619  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:13.746372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:16.822393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:15.273211  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:15.772786  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.273580  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.773395  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.272868  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.773484  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.273717  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.405010  800087 kubeadm.go:1113] duration metric: took 3.86025273s to wait for elevateKubeSystemPrivileges
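The loop of repeated "kubectl get sa default" calls above is how minikube waits for the default ServiceAccount to be provisioned after creating the minikube-rbac clusterrolebinding (cluster-admin for kube-system:default); here that took roughly 3.86s. To verify the binding afterwards, a sketch reusing the in-guest kubectl and kubeconfig shown in the log:

  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide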
	I1007 13:41:18.405055  800087 kubeadm.go:394] duration metric: took 5m1.164485599s to StartCluster
	I1007 13:41:18.405081  800087 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.405182  800087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:41:18.406935  800087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.407244  800087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:41:18.407398  800087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:41:18.407513  800087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016701"
	I1007 13:41:18.407539  800087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-016701"
	W1007 13:41:18.407549  800087 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:41:18.407548  800087 addons.go:69] Setting default-storageclass=true in profile "no-preload-016701"
	I1007 13:41:18.407572  800087 addons.go:69] Setting metrics-server=true in profile "no-preload-016701"
	I1007 13:41:18.407615  800087 addons.go:234] Setting addon metrics-server=true in "no-preload-016701"
	W1007 13:41:18.407721  800087 addons.go:243] addon metrics-server should already be in state true
	I1007 13:41:18.407850  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407591  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407545  800087 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:41:18.407594  800087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016701"
	I1007 13:41:18.408374  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408387  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408417  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408424  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408447  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408542  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.409406  800087 out.go:177] * Verifying Kubernetes components...
	I1007 13:41:18.411018  800087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:41:18.425614  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I1007 13:41:18.426275  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.426764  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I1007 13:41:18.426926  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.426956  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427308  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.427410  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.427840  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.427862  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427976  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.428024  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.428257  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.428470  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.428478  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I1007 13:41:18.428980  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.429578  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.429605  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.429927  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.430564  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.430602  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.431895  800087 addons.go:234] Setting addon default-storageclass=true in "no-preload-016701"
	W1007 13:41:18.431918  800087 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:41:18.431952  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.432279  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.432319  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.445003  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1007 13:41:18.445514  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.445968  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1007 13:41:18.446101  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.446125  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.446534  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.446580  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.446821  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.447159  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.447187  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.447559  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.447754  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.449595  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.450543  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.452177  800087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:41:18.452788  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I1007 13:41:18.453311  800087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:41:18.453332  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.454421  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.454443  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.454767  800087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.454791  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:41:18.454813  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.454902  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.455260  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:41:18.455277  800087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:41:18.455293  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.455514  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.455574  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.458904  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459133  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459321  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459529  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459681  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459699  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459704  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.459849  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.459962  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459994  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.460161  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.460349  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.460480  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.495484  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1007 13:41:18.496027  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.496790  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.496828  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.497324  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.497537  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.499178  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.499425  800087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.499440  800087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:41:18.499457  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.502808  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503337  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.503363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503573  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.503796  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.503972  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.504135  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.607501  800087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:41:18.631538  800087 node_ready.go:35] waiting up to 6m0s for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645041  800087 node_ready.go:49] node "no-preload-016701" has status "Ready":"True"
	I1007 13:41:18.645065  800087 node_ready.go:38] duration metric: took 13.492405ms for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645076  800087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:18.651831  800087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:18.689502  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.714363  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:41:18.714386  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:41:18.738095  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.794344  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:41:18.794384  800087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:41:18.906848  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:18.906886  800087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:41:18.991553  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:19.434333  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434360  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434687  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.434701  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434710  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434716  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434932  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434987  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435004  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.435015  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434993  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435269  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435274  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435282  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.435290  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.435297  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.436889  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.436909  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.456678  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.456714  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.457112  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.457133  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.457164  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.382548  800087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.390945966s)
	I1007 13:41:20.382614  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.382628  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.382952  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383052  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383068  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.383077  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.383010  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.383354  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383370  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383384  800087 addons.go:475] Verifying addon metrics-server=true in "no-preload-016701"
	I1007 13:41:20.385366  800087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:41:20.386603  800087 addons.go:510] duration metric: took 1.979211294s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
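Note that although storage-provisioner, default-storageclass and metrics-server all report enabled, the metrics-server pod stays Pending throughout this excerpt (see the pod lists above and below). That is consistent with the image the addon was pointed at here, fake.domain/registry.k8s.io/echoserver:1.4, which cannot be pulled. A quick check from the test host (sketch; k8s-app=metrics-server is assumed to be the label on the addon's deployment):

  kubectl --context no-preload-016701 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context no-preload-016701 -n kube-system describe deploy metrics-server | grep -i image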
	I1007 13:41:20.665725  800087 pod_ready.go:103] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"False"
	I1007 13:41:22.158063  800087 pod_ready.go:93] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:22.158090  800087 pod_ready.go:82] duration metric: took 3.506228901s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:22.158100  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165304  800087 pod_ready.go:93] pod "kube-apiserver-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.165330  800087 pod_ready.go:82] duration metric: took 2.007223213s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165340  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172907  800087 pod_ready.go:93] pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.172930  800087 pod_ready.go:82] duration metric: took 7.583143ms for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172939  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180216  800087 pod_ready.go:93] pod "kube-proxy-bjqg2" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.180243  800087 pod_ready.go:82] duration metric: took 7.297732ms for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180255  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185080  800087 pod_ready.go:93] pod "kube-scheduler-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.185108  800087 pod_ready.go:82] duration metric: took 4.84549ms for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185119  800087 pod_ready.go:39] duration metric: took 5.540032302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:24.185141  800087 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:41:24.185197  800087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:41:24.201360  800087 api_server.go:72] duration metric: took 5.794073168s to wait for apiserver process to appear ...
	I1007 13:41:24.201464  800087 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:41:24.201496  800087 api_server.go:253] Checking apiserver healthz at https://192.168.39.197:8443/healthz ...
	I1007 13:41:24.207141  800087 api_server.go:279] https://192.168.39.197:8443/healthz returned 200:
	ok
	I1007 13:41:24.208456  800087 api_server.go:141] control plane version: v1.31.1
	I1007 13:41:24.208481  800087 api_server.go:131] duration metric: took 7.007742ms to wait for apiserver health ...
	I1007 13:41:24.208491  800087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:41:24.213660  800087 system_pods.go:59] 9 kube-system pods found
	I1007 13:41:24.213693  800087 system_pods.go:61] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213701  800087 system_pods.go:61] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213711  800087 system_pods.go:61] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.213716  800087 system_pods.go:61] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.213719  800087 system_pods.go:61] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.213722  800087 system_pods.go:61] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.213725  800087 system_pods.go:61] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.213730  800087 system_pods.go:61] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.213734  800087 system_pods.go:61] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.213742  800087 system_pods.go:74] duration metric: took 5.244677ms to wait for pod list to return data ...
	I1007 13:41:24.213749  800087 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:41:24.216891  800087 default_sa.go:45] found service account: "default"
	I1007 13:41:24.216923  800087 default_sa.go:55] duration metric: took 3.165762ms for default service account to be created ...
	I1007 13:41:24.216936  800087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:41:24.366926  800087 system_pods.go:86] 9 kube-system pods found
	I1007 13:41:24.366962  800087 system_pods.go:89] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366970  800087 system_pods.go:89] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366977  800087 system_pods.go:89] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.366982  800087 system_pods.go:89] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.366986  800087 system_pods.go:89] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.366990  800087 system_pods.go:89] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.366993  800087 system_pods.go:89] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.366998  800087 system_pods.go:89] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.367001  800087 system_pods.go:89] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.367011  800087 system_pods.go:126] duration metric: took 150.068129ms to wait for k8s-apps to be running ...
	I1007 13:41:24.367018  800087 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:41:24.367064  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:24.383197  800087 system_svc.go:56] duration metric: took 16.165166ms WaitForService to wait for kubelet
	I1007 13:41:24.383232  800087 kubeadm.go:582] duration metric: took 5.975954284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:41:24.383256  800087 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:41:24.563433  800087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:41:24.563469  800087 node_conditions.go:123] node cpu capacity is 2
	I1007 13:41:24.563486  800087 node_conditions.go:105] duration metric: took 180.224622ms to run NodePressure ...
	I1007 13:41:24.563503  800087 start.go:241] waiting for startup goroutines ...
	I1007 13:41:24.563514  800087 start.go:246] waiting for cluster config update ...
	I1007 13:41:24.563529  800087 start.go:255] writing updated cluster config ...
	I1007 13:41:24.563898  800087 ssh_runner.go:195] Run: rm -f paused
	I1007 13:41:24.619289  800087 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:41:24.621527  800087 out.go:177] * Done! kubectl is now configured to use "no-preload-016701" cluster and "default" namespace by default
	I1007 13:41:22.898326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:25.970388  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:32.050353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:35.122329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:41.202320  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:44.274335  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
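The troubleshooting hints printed by kubeadm above reduce to the following on-node checks (commands copied from the message itself; CONTAINERID is a placeholder for an ID returned by the ps command):

    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID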
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
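The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is removed if it does not reference the expected control-plane endpoint. A condensed sketch of that pattern (the loop form is an illustration; file names and endpoint are taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done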
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:41:50.354360  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:53.426359  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:59.510321  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:02.578322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:08.658292  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:11.730352  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:17.810322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:20.882397  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:26.962343  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:30.034345  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:36.114353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:39.186453  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.266293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:48.338329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:54.418332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:57.490294  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:03.570372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:06.642286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:09.643253  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:09.643290  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643598  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:09.643627  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:09.645347  802960 machine.go:96] duration metric: took 4m37.397836997s to provisionDockerMachine
	I1007 13:43:09.645389  802960 fix.go:56] duration metric: took 4m37.421085967s for fixHost
	I1007 13:43:09.645394  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 4m37.421104002s
	W1007 13:43:09.645409  802960 start.go:714] error starting host: provision: host is not running
	W1007 13:43:09.645530  802960 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 13:43:09.645542  802960 start.go:729] Will try again in 5 seconds ...
	I1007 13:43:14.646206  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:43:14.646330  802960 start.go:364] duration metric: took 74.211µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:43:14.646374  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:43:14.646382  802960 fix.go:54] fixHost starting: 
	I1007 13:43:14.646717  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:43:14.646746  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:43:14.662426  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I1007 13:43:14.663016  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:43:14.663790  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:43:14.663822  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:43:14.664176  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:43:14.664429  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:14.664605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:43:14.666440  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Stopped err=<nil>
	I1007 13:43:14.666467  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	W1007 13:43:14.666648  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:43:14.668507  802960 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-489319" ...
	I1007 13:43:14.669973  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Start
	I1007 13:43:14.670294  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring networks are active...
	I1007 13:43:14.671299  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network default is active
	I1007 13:43:14.671623  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network mk-default-k8s-diff-port-489319 is active
	I1007 13:43:14.672332  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Getting domain xml...
	I1007 13:43:14.673106  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Creating domain...
	I1007 13:43:15.035227  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting to get IP...
	I1007 13:43:15.036226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036673  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036768  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.036657  804186 retry.go:31] will retry after 204.852009ms: waiting for machine to come up
	I1007 13:43:15.243827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244610  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244699  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.244581  804186 retry.go:31] will retry after 334.887784ms: waiting for machine to come up
	I1007 13:43:15.581226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581717  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581747  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.581665  804186 retry.go:31] will retry after 354.992125ms: waiting for machine to come up
	I1007 13:43:15.938078  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938577  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938614  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.938518  804186 retry.go:31] will retry after 592.784389ms: waiting for machine to come up
	I1007 13:43:16.533531  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534103  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534128  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:16.534054  804186 retry.go:31] will retry after 756.034822ms: waiting for machine to come up
	I1007 13:43:17.291995  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292785  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292807  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:17.292736  804186 retry.go:31] will retry after 896.816081ms: waiting for machine to come up
	I1007 13:43:18.191016  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191527  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191560  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:18.191466  804186 retry.go:31] will retry after 1.08609499s: waiting for machine to come up
	I1007 13:43:19.280109  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280537  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280576  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:19.280520  804186 retry.go:31] will retry after 1.392221474s: waiting for machine to come up
	I1007 13:43:20.674622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675071  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675115  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:20.675031  804186 retry.go:31] will retry after 1.78021676s: waiting for machine to come up
	I1007 13:43:22.457647  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458248  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:22.458160  804186 retry.go:31] will retry after 2.117086662s: waiting for machine to come up
	I1007 13:43:24.576838  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577415  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577445  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:24.577364  804186 retry.go:31] will retry after 2.850833043s: waiting for machine to come up
	I1007 13:43:27.432222  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432855  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432882  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:27.432789  804186 retry.go:31] will retry after 3.63047619s: waiting for machine to come up
	I1007 13:43:31.065089  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.065729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Found IP for machine: 192.168.61.101
	I1007 13:43:31.065759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserving static IP address...
	I1007 13:43:31.065782  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has current primary IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.066317  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.066362  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserved static IP address: 192.168.61.101
	I1007 13:43:31.066395  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | skip adding static IP to network mk-default-k8s-diff-port-489319 - found existing host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"}
	I1007 13:43:31.066407  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for SSH to be available...
	I1007 13:43:31.066449  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Getting to WaitForSSH function...
	I1007 13:43:31.068871  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069233  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.069265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH client type: external
	I1007 13:43:31.069398  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa (-rw-------)
	I1007 13:43:31.069451  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:43:31.069466  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | About to run SSH command:
	I1007 13:43:31.069475  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | exit 0
	I1007 13:43:31.194580  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | SSH cmd err, output: <nil>: 
	I1007 13:43:31.195021  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetConfigRaw
	I1007 13:43:31.195801  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.198966  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199324  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.199359  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199635  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:43:31.199893  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:43:31.199919  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:31.200168  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.202444  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202817  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.202849  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202989  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.203185  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203352  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.203683  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.203930  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.203943  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:43:31.307182  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:43:31.307224  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307497  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:31.307525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307722  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.310462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.310835  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.310905  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.311014  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.311192  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311437  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311613  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.311794  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.311969  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.311981  802960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489319 && echo "default-k8s-diff-port-489319" | sudo tee /etc/hostname
	I1007 13:43:31.436251  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489319
	
	I1007 13:43:31.436288  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.439927  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440241  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.440276  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440616  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.440887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441042  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441197  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.441360  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.441584  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.441612  802960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489319/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:43:31.552909  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:31.552947  802960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:43:31.552983  802960 buildroot.go:174] setting up certificates
	I1007 13:43:31.553002  802960 provision.go:84] configureAuth start
	I1007 13:43:31.553012  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.553454  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.556642  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557015  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.557055  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.559909  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560460  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.560487  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560719  802960 provision.go:143] copyHostCerts
	I1007 13:43:31.560792  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:43:31.560812  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:43:31.560889  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:43:31.561045  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:43:31.561058  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:43:31.561084  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:43:31.561171  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:43:31.561180  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:43:31.561208  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:43:31.561271  802960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489319 san=[127.0.0.1 192.168.61.101 default-k8s-diff-port-489319 localhost minikube]
	I1007 13:43:31.871377  802960 provision.go:177] copyRemoteCerts
	I1007 13:43:31.871459  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:43:31.871489  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.874464  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.874887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.874925  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.875112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.875368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.875547  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.875675  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:31.957423  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:43:31.988554  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1007 13:43:32.018470  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
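The three scp steps above place the CA and the freshly generated server certificate under /etc/docker on the guest. If the SANs listed in the provision step need to be double-checked, a generic inspection (standard openssl usage, not part of this run) would be:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'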
	I1007 13:43:32.046799  802960 provision.go:87] duration metric: took 493.782862ms to configureAuth
	I1007 13:43:32.046830  802960 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:43:32.047021  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:43:32.047151  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.050313  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.050727  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.050760  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.051011  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.051216  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051385  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051522  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.051685  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.051878  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.051893  802960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:43:32.291927  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:43:32.291957  802960 machine.go:96] duration metric: took 1.092049658s to provisionDockerMachine
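How /etc/sysconfig/crio.minikube (written a few lines above) is wired into the crio service is not visible in this log; it is assumed to be referenced by the service unit in the guest image. A generic way to verify on the node:

    systemctl cat crio
    cat /etc/sysconfig/crio.minikube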
	I1007 13:43:32.291970  802960 start.go:293] postStartSetup for "default-k8s-diff-port-489319" (driver="kvm2")
	I1007 13:43:32.291985  802960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:43:32.292025  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.292491  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:43:32.292523  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.296195  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296625  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.296660  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296889  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.297104  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.297300  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.297479  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.377749  802960 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:43:32.382419  802960 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:43:32.382459  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:43:32.382557  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:43:32.382663  802960 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:43:32.382767  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:43:32.394059  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:32.422256  802960 start.go:296] duration metric: took 130.264438ms for postStartSetup
	I1007 13:43:32.422310  802960 fix.go:56] duration metric: took 17.775926417s for fixHost
	I1007 13:43:32.422340  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.425739  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.426254  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.426678  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426941  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.427080  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.427294  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.427305  802960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:43:32.531411  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308612.494637714
	
	I1007 13:43:32.531442  802960 fix.go:216] guest clock: 1728308612.494637714
	I1007 13:43:32.531450  802960 fix.go:229] Guest: 2024-10-07 13:43:32.494637714 +0000 UTC Remote: 2024-10-07 13:43:32.422315329 +0000 UTC m=+300.358475670 (delta=72.322385ms)
	I1007 13:43:32.531474  802960 fix.go:200] guest clock delta is within tolerance: 72.322385ms
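Sanity check of the reported delta: guest 13:43:32.494637714 minus remote 13:43:32.422315329 gives 0.072322385 s ≈ 72.322385 ms, matching the value logged and within minikube's skew tolerance.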
	I1007 13:43:32.531480  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 17.885135029s
	I1007 13:43:32.531503  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.531787  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:32.534783  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.535265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535472  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536178  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536404  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536518  802960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:43:32.536581  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.536697  802960 ssh_runner.go:195] Run: cat /version.json
	I1007 13:43:32.536729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.539709  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.539743  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540166  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540202  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540348  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540417  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540598  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540638  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540762  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.540777  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540884  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.540947  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.541089  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.642238  802960 ssh_runner.go:195] Run: systemctl --version
	I1007 13:43:32.649391  802960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:43:32.799266  802960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:43:32.805598  802960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:43:32.805707  802960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:43:32.823518  802960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:43:32.823560  802960 start.go:495] detecting cgroup driver to use...
	I1007 13:43:32.823651  802960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:43:32.842054  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:43:32.858474  802960 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:43:32.858550  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:43:32.873750  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:43:32.889165  802960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:43:33.019729  802960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:43:33.182269  802960 docker.go:233] disabling docker service ...
	I1007 13:43:33.182371  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:43:33.198610  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:43:33.213911  802960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:43:33.343594  802960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:43:33.476026  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:43:33.493130  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:43:33.513584  802960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:43:33.513652  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.525714  802960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:43:33.525816  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.538658  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.551146  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.564914  802960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:43:33.578180  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.590140  802960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.610967  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.624890  802960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:43:33.636736  802960 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:43:33.636825  802960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:43:33.652573  802960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:43:33.665083  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:33.800780  802960 ssh_runner.go:195] Run: sudo systemctl restart crio
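For reference, the CRI-O reconfiguration logged above reduces to a few in-place edits of the 02-crio.conf drop-in followed by a restart. A condensed sketch of the same steps (file path and pause image copied from the log; an illustration of the effect, not minikube's exact code path):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio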
	I1007 13:43:33.898225  802960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:43:33.898309  802960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:43:33.903209  802960 start.go:563] Will wait 60s for crictl version
	I1007 13:43:33.903269  802960 ssh_runner.go:195] Run: which crictl
	I1007 13:43:33.907326  802960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:43:33.959008  802960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:43:33.959168  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:33.990929  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:34.023756  802960 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:43:34.025496  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:34.028784  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029327  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:34.029360  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029672  802960 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:43:34.034690  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
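The one-liner above is an idempotent /etc/hosts rewrite: drop any stale host.minikube.internal entry, append the fresh mapping, then copy the temp file back with root privileges. Broken out into its two steps (same command as in the log, shown only for readability):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts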
	I1007 13:43:34.048101  802960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:43:34.048259  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:43:34.048325  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:34.086926  802960 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:43:34.087050  802960 ssh_runner.go:195] Run: which lz4
	I1007 13:43:34.091973  802960 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:43:34.096623  802960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:43:34.096671  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:43:35.604800  802960 crio.go:462] duration metric: took 1.512877493s to copy over tarball
	I1007 13:43:35.604892  802960 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:43:37.805292  802960 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200363211s)
	I1007 13:43:37.805327  802960 crio.go:469] duration metric: took 2.200488229s to extract the tarball
	I1007 13:43:37.805338  802960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:43:37.845477  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:37.895532  802960 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:43:37.895562  802960 cache_images.go:84] Images are preloaded, skipping loading
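Whether the preload tarball has to be copied at all is decided by inspecting CRI-O's image store; after extraction the same inspection reports the images as present. A quick manual spot-check on the node (image name taken from the earlier "couldn't find preloaded image" message; crictl assumed to be on the PATH as it is on this ISO):

	sudo crictl images | grep 'registry.k8s.io/kube-apiserver'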
	I1007 13:43:37.895574  802960 kubeadm.go:934] updating node { 192.168.61.101 8444 v1.31.1 crio true true} ...
	I1007 13:43:37.895725  802960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-489319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
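The kubelet flags above are written out a few lines below as the 10-kubeadm.conf drop-in plus the kubelet.service unit. When one of these clusters needs debugging, the effective unit can be inspected on the node with standard systemd tooling (a hedged suggestion for manual inspection, not a step this test run performs):

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart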
	I1007 13:43:37.895804  802960 ssh_runner.go:195] Run: crio config
	I1007 13:43:37.949367  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:37.949395  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:37.949410  802960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:43:37.949433  802960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.101 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489319 NodeName:default-k8s-diff-port-489319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:43:37.949576  802960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.101
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
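The generated kubeadm.yaml above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is later fed to the individual init phases. Recent kubeadm releases can also sanity-check such a file up front; a small sketch using the binary and path this log writes to (the validate subcommand is an assumption about the installed kubeadm, not a step minikube runs here):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml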
	I1007 13:43:37.949659  802960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:43:37.959941  802960 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:43:37.960076  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:43:37.970766  802960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1007 13:43:37.989311  802960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:43:38.009634  802960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1007 13:43:38.027642  802960 ssh_runner.go:195] Run: grep 192.168.61.101	control-plane.minikube.internal$ /etc/hosts
	I1007 13:43:38.031764  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:38.044131  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:38.185253  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:43:38.212538  802960 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319 for IP: 192.168.61.101
	I1007 13:43:38.212565  802960 certs.go:194] generating shared ca certs ...
	I1007 13:43:38.212589  802960 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:43:38.212799  802960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:43:38.212859  802960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:43:38.212873  802960 certs.go:256] generating profile certs ...
	I1007 13:43:38.212997  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/client.key
	I1007 13:43:38.213082  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key.f1e25377
	I1007 13:43:38.213153  802960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key
	I1007 13:43:38.213325  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:43:38.213365  802960 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:43:38.213390  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:43:38.213425  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:43:38.213471  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:43:38.213501  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:43:38.213559  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:38.214588  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:43:38.266516  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:43:38.305985  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:43:38.353490  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:43:38.380638  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 13:43:38.424440  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:43:38.452428  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:43:38.480709  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:43:38.509639  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:43:38.536940  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:43:38.564021  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:43:38.591067  802960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:43:38.609218  802960 ssh_runner.go:195] Run: openssl version
	I1007 13:43:38.616235  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:43:38.629007  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634324  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634400  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.641330  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:43:38.654384  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:43:38.667134  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672330  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672407  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.678719  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:43:38.690565  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:43:38.705158  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710787  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710868  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.717093  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
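The .0 symlink names used above (51391683.0, b5213941.0) are OpenSSL subject hashes, which is how the system trust store looks up a CA by subject. The mapping for any of these PEMs can be reproduced with the same openssl invocation the log uses:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"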
	I1007 13:43:38.729957  802960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:43:38.735559  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:43:38.742580  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:43:38.749684  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:43:38.756534  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:43:38.762897  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:43:38.770450  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
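Each -checkend 86400 call above asks whether the certificate remains valid for the next 86400 seconds (24 hours); the answer is carried in the exit status, so a passing run simply moves on to the next file. As a standalone check (a hypothetical wrapper around the same openssl flags):

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "valid for at least 24h"
	else
	  echo "expires within 24h (or already expired)"
	fi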
	I1007 13:43:38.777701  802960 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:43:38.777813  802960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:43:38.777880  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.822678  802960 cri.go:89] found id: ""
	I1007 13:43:38.822746  802960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:43:38.833436  802960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:43:38.833463  802960 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:43:38.833516  802960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:43:38.844226  802960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:43:38.845383  802960 kubeconfig.go:125] found "default-k8s-diff-port-489319" server: "https://192.168.61.101:8444"
	I1007 13:43:38.848063  802960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:43:38.859087  802960 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.101
	I1007 13:43:38.859129  802960 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:43:38.859142  802960 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:43:38.859221  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.902955  802960 cri.go:89] found id: ""
	I1007 13:43:38.903054  802960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:43:38.920556  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:43:38.930998  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:43:38.931027  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:43:38.931095  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:43:38.940538  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:43:38.940608  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:43:38.951198  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:43:38.960653  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:43:38.960746  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:43:38.970800  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.981094  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:43:38.981176  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.991845  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:43:39.001966  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:43:39.002080  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
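The four grep/rm pairs above implement a simple rule: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8444, otherwise delete it so the following init phases regenerate it. For a single file the check looks like (same endpoint and path as in the log):

	sudo grep -q 'https://control-plane.minikube.internal:8444' /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf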
	I1007 13:43:39.014014  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:43:39.026304  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:39.157169  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.098491  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.941274215s)
	I1007 13:43:41.098539  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.310925  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.402330  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.502763  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:43:41.502864  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.003197  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 
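A hedged troubleshooting sketch for the K8S_KUBELET_NOT_RUNNING exit above, assuming shell access to the failing node; the file paths are the ones this log reports, and field names can vary across kubelet and CRI-O versions:

# Kubelet state and recent logs (the same commands kubeadm suggests above)
sudo systemctl status kubelet
sudo journalctl -xeu kubelet --no-pager | tail -n 50

# A cgroup-driver mismatch (cgroupfs vs systemd) is a common cause of this
# failure; compare the kubelet's driver with CRI-O's cgroup manager before
# retrying with --extra-config=kubelet.cgroup-driver=systemd as suggested above.
sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
sudo grep -ri cgroup_manager /etc/crio/   # may be unset when the default applies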
	I1007 13:43:42.503307  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.523040  802960 api_server.go:72] duration metric: took 1.020293575s to wait for apiserver process to appear ...
	I1007 13:43:42.523069  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:43:42.523093  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:42.523750  802960 api_server.go:269] stopped: https://192.168.61.101:8444/healthz: Get "https://192.168.61.101:8444/healthz": dial tcp 192.168.61.101:8444: connect: connection refused
	I1007 13:43:43.023271  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.500619  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.500651  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.500665  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.544628  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.544688  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.544701  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.643845  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:45.643890  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.023194  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.029635  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.029672  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.523339  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.528709  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.528745  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.023901  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.032151  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:47.032192  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.523593  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.531558  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:43:47.542161  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:43:47.542203  802960 api_server.go:131] duration metric: took 5.019126566s to wait for apiserver health ...
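For reference, the anonymous 403s and per-check 500s polled above can be reproduced by hand; a sketch using the admin kubeconfig kubeadm writes (the 403 is typically just the unauthenticated probe being rejected before the RBAC bootstrap roles exist, not an apiserver fault):

# Same endpoint, authenticated, with the per-check breakdown
kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/healthz?verbose'
# readyz/livez expose the same post-start hooks on current apiservers
kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/readyz?verbose'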
	I1007 13:43:47.542216  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:47.542227  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:47.544352  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:43:47.546075  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:43:47.560213  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
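Illustrative only: the 496-byte conflist copied here is not reproduced in this log, so the sketch below is a generic bridge CNI configuration of the same shape, not the exact file minikube writes (the subnet and flags are placeholders):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF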
	I1007 13:43:47.612380  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:43:47.633953  802960 system_pods.go:59] 8 kube-system pods found
	I1007 13:43:47.634015  802960 system_pods.go:61] "coredns-7c65d6cfc9-4nl8s" [798ab07d-53ab-45f3-9517-a3ea78152fc7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:43:47.634042  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [a3fd82bc-a9b5-4955-b3f8-d88c5bb5951d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:43:47.634058  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [431b750f-f9ca-4e27-a7db-6c758047acf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:43:47.634069  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [0289a6a2-f3b7-43fa-a97c-4464b93c2ecc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:43:47.634081  802960 system_pods.go:61] "kube-proxy-9s9p4" [8aeaf16d-764e-4da5-b27d-1915e33b3f2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 13:43:47.634102  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [4e5878d2-8ceb-4707-b2fd-834fd5f485be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:43:47.634114  802960 system_pods.go:61] "metrics-server-6867b74b74-s8v5f" [c498a0f1-ffb8-482d-b6be-ce04d3d6ff85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:43:47.634120  802960 system_pods.go:61] "storage-provisioner" [c7754b45-21b7-4a4e-b21a-11c5e9eae07d] Running
	I1007 13:43:47.634133  802960 system_pods.go:74] duration metric: took 21.726405ms to wait for pod list to return data ...
	I1007 13:43:47.634143  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:43:47.646482  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:43:47.646520  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:43:47.646534  802960 node_conditions.go:105] duration metric: took 12.386071ms to run NodePressure ...
	I1007 13:43:47.646556  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:48.002169  802960 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007151  802960 kubeadm.go:739] kubelet initialised
	I1007 13:43:48.007183  802960 kubeadm.go:740] duration metric: took 4.972433ms waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007211  802960 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:43:48.013961  802960 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:50.020725  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:52.020875  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:53.521602  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.521625  802960 pod_ready.go:82] duration metric: took 5.507628288s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.521637  802960 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529062  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.529090  802960 pod_ready.go:82] duration metric: took 7.446479ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529101  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:55.536129  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:58.036214  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:00.535183  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:02.035543  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.035567  802960 pod_ready.go:82] duration metric: took 8.506460378s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.035578  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040799  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.040823  802960 pod_ready.go:82] duration metric: took 5.237515ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040833  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045410  802960 pod_ready.go:93] pod "kube-proxy-9s9p4" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.045434  802960 pod_ready.go:82] duration metric: took 4.593822ms for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045444  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049665  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.049691  802960 pod_ready.go:82] duration metric: took 4.239058ms for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049701  802960 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:04.056407  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:06.062186  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:08.555372  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:10.556334  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:12.556423  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:14.557939  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:17.055829  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:19.056756  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:21.057049  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:23.058462  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:25.556545  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:27.556661  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:30.057123  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:32.057581  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:34.556797  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:37.055971  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:39.057054  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:41.057194  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:43.555532  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:45.556365  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:47.556508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:50.056070  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:52.056349  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:54.057809  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:56.556012  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:58.556338  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:00.558599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:03.058077  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:05.558375  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:07.558780  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:10.055494  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:12.057085  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:14.557752  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:17.056626  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:19.556724  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:22.057696  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:24.556552  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:27.056861  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:29.057505  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:31.555965  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:33.557729  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:35.557839  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:38.056814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:40.057838  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:42.058324  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:44.557202  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:47.056736  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:49.057871  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:51.556705  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:53.557023  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:55.557080  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:57.557599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:00.057399  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:02.057880  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:04.556689  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:06.557381  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:09.057237  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:11.057328  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:13.556210  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:15.556303  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:17.556994  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:19.557835  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:22.056480  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:24.556325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:26.556600  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:28.556639  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:30.556983  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:33.056142  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:35.057034  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:37.057246  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:39.556678  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:42.056900  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:44.057207  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:46.057325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:48.556417  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:51.056726  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:53.556598  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:55.557245  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:58.058116  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:00.059008  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:02.557074  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:05.056911  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:07.057374  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:09.556185  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:11.556584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:14.056433  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:16.056567  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:18.557584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:21.056484  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:23.056610  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.058105  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:27.555814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.556605  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:31.557226  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:34.057006  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.556126  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:38.556720  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:40.557339  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.055498  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:45.056400  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:47.056671  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:49.556490  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:52.056617  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:54.556079  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.556885  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:59.056725  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:01.560508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.050835  802960 pod_ready.go:82] duration metric: took 4m0.001111748s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	E1007 13:48:02.050883  802960 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:48:02.050910  802960 pod_ready.go:39] duration metric: took 4m14.0436862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
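The four-minute wait that just timed out corresponds roughly to the sketch below (the kubectl context is inferred from the profile's node name and the pod name is taken from the log); when the wait fails, describe and logs on the pod usually show why metrics-server never passed its readiness probe:

kubectl --context default-k8s-diff-port-489319 -n kube-system \
  wait --for=condition=Ready pod/metrics-server-6867b74b74-s8v5f --timeout=4m0s
kubectl --context default-k8s-diff-port-489319 -n kube-system \
  describe pod metrics-server-6867b74b74-s8v5f
kubectl --context default-k8s-diff-port-489319 -n kube-system \
  logs metrics-server-6867b74b74-s8v5f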
	I1007 13:48:02.050947  802960 kubeadm.go:597] duration metric: took 4m23.217477497s to restartPrimaryControlPlane
	W1007 13:48:02.051112  802960 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:48:02.051179  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:48:28.304486  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.253272533s)
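The 26s 'kubeadm reset --force' above tears down the control plane that could not be restarted: it stops the static-pod containers through the CRI socket and deletes the kubeconfig files under /etc/kubernetes, which is why the config check immediately below finds all four missing. A quick node-side check that the slate is clean before the re-init (sketch):

sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
sudo ls /etc/kubernetes/manifests /etc/kubernetes/*.conf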
	I1007 13:48:28.304707  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:28.320794  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:48:28.332332  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:48:28.343070  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:48:28.343095  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:48:28.343157  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:48:28.354012  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:48:28.354118  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:48:28.364581  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:48:28.375492  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:48:28.375560  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:48:28.386761  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.396663  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:48:28.396728  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.407316  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:48:28.417872  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:48:28.417938  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:48:28.428569  802960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:48:28.476704  802960 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:48:28.476823  802960 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:48:28.590009  802960 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:48:28.590162  802960 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:48:28.590300  802960 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:48:28.600046  802960 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:48:28.602443  802960 out.go:235]   - Generating certificates and keys ...
	I1007 13:48:28.602559  802960 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:48:28.602623  802960 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:48:28.602711  802960 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:48:28.602790  802960 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:48:28.602884  802960 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:48:28.602931  802960 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:48:28.603008  802960 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:48:28.603118  802960 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:48:28.603256  802960 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:48:28.603372  802960 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:48:28.603429  802960 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:48:28.603498  802960 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:48:28.710739  802960 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:48:28.967010  802960 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:48:29.107742  802960 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:48:29.239779  802960 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:48:29.344572  802960 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:48:29.345301  802960 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:48:29.348025  802960 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:48:29.350415  802960 out.go:235]   - Booting up control plane ...
	I1007 13:48:29.350549  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:48:29.350650  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:48:29.350732  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:48:29.369742  802960 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:48:29.379251  802960 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:48:29.379337  802960 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:48:29.527857  802960 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:48:29.528013  802960 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:48:30.528609  802960 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001343456s
	I1007 13:48:30.528741  802960 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:48:35.532432  802960 kubeadm.go:310] [api-check] The API server is healthy after 5.003996251s
	I1007 13:48:35.548242  802960 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:48:35.569290  802960 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:48:35.607149  802960 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:48:35.607386  802960 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-489319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:48:35.623965  802960 kubeadm.go:310] [bootstrap-token] Using token: 5jqtrt.7avot15frjqa3f3n
	I1007 13:48:35.626327  802960 out.go:235]   - Configuring RBAC rules ...
	I1007 13:48:35.626469  802960 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:48:35.632447  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:48:35.644119  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:48:35.653482  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:48:35.659903  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:48:35.666151  802960 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:48:35.941468  802960 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:48:36.395332  802960 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:48:36.941654  802960 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:48:36.942749  802960 kubeadm.go:310] 
	I1007 13:48:36.942851  802960 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:48:36.942863  802960 kubeadm.go:310] 
	I1007 13:48:36.942955  802960 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:48:36.942966  802960 kubeadm.go:310] 
	I1007 13:48:36.942997  802960 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:48:36.943073  802960 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:48:36.943160  802960 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:48:36.943180  802960 kubeadm.go:310] 
	I1007 13:48:36.943247  802960 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:48:36.943254  802960 kubeadm.go:310] 
	I1007 13:48:36.943300  802960 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:48:36.943310  802960 kubeadm.go:310] 
	I1007 13:48:36.943379  802960 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:48:36.943477  802960 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:48:36.943559  802960 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:48:36.943567  802960 kubeadm.go:310] 
	I1007 13:48:36.943639  802960 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:48:36.943758  802960 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:48:36.943781  802960 kubeadm.go:310] 
	I1007 13:48:36.944023  802960 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944184  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:48:36.944212  802960 kubeadm.go:310] 	--control-plane 
	I1007 13:48:36.944225  802960 kubeadm.go:310] 
	I1007 13:48:36.944328  802960 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:48:36.944341  802960 kubeadm.go:310] 
	I1007 13:48:36.944441  802960 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944564  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:48:36.946569  802960 kubeadm.go:310] W1007 13:48:28.442953    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.946947  802960 kubeadm.go:310] W1007 13:48:28.444068    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.947056  802960 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:48:36.947089  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:48:36.947100  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:48:36.949279  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:48:36.951020  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:48:36.966261  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:48:36.991447  802960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:48:36.991537  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:36.991576  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-489319 minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=default-k8s-diff-port-489319 minikube.k8s.io/primary=true
	I1007 13:48:37.245837  802960 ops.go:34] apiserver oom_adj: -16
	I1007 13:48:37.253690  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:37.754572  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.254294  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.754766  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.253915  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.754118  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.254526  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.753887  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.254082  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.441338  802960 kubeadm.go:1113] duration metric: took 4.449876263s to wait for elevateKubeSystemPrivileges
	I1007 13:48:41.441397  802960 kubeadm.go:394] duration metric: took 5m2.66370907s to StartCluster
	I1007 13:48:41.441446  802960 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.441564  802960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:48:41.443987  802960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.444365  802960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:48:41.444449  802960 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:48:41.444606  802960 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444633  802960 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444647  802960 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:48:41.444644  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:48:41.444669  802960 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444689  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444696  802960 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444748  802960 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444763  802960 addons.go:243] addon metrics-server should already be in state true
	I1007 13:48:41.444799  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444711  802960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-489319"
	I1007 13:48:41.445223  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445236  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445242  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445285  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445305  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445290  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.446533  802960 out.go:177] * Verifying Kubernetes components...
	I1007 13:48:41.448204  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:48:41.463351  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1007 13:48:41.463547  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I1007 13:48:41.464007  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464024  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464636  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464651  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464667  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.464674  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.465115  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465118  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465331  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.465770  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.465817  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.466630  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I1007 13:48:41.467414  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.468267  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.468293  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.468696  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.469177  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.469225  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.469939  802960 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.469967  802960 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:48:41.470004  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.470429  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.470491  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.485835  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I1007 13:48:41.485934  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I1007 13:48:41.486390  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1007 13:48:41.486401  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486694  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486850  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.487029  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487048  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487286  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487314  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487375  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.487668  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487692  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487915  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.487940  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488170  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488207  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.488812  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.488866  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.490870  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.491026  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.493370  802960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:48:41.493369  802960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:48:41.495269  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:48:41.495304  802960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:48:41.495335  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.495482  802960 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.495504  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:48:41.495525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.499997  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500173  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500600  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500819  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.501010  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501125  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501279  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501286  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501657  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.501683  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.509460  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1007 13:48:41.510229  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.510898  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.510934  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.511328  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.511540  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.513219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.513712  802960 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.513734  802960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:48:41.513759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.517041  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517439  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.517462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517630  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.517885  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.518121  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.518301  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.674144  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:48:41.742749  802960 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753582  802960 node_ready.go:49] node "default-k8s-diff-port-489319" has status "Ready":"True"
	I1007 13:48:41.753616  802960 node_ready.go:38] duration metric: took 10.764539ms for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753630  802960 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:41.769510  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:41.796357  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.844420  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.871099  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:48:41.871126  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:48:41.978289  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:48:41.978325  802960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:48:42.063366  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.063399  802960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:48:42.204106  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.261831  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.261861  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.262168  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.262192  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.262202  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.262209  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.263023  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.263040  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.285756  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.285786  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.286112  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.286135  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.286145  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044454  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.199980665s)
	I1007 13:48:43.044515  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.044892  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.044910  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.044926  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044934  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044942  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.045192  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.045208  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.045193  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303372  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.099210402s)
	I1007 13:48:43.303432  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303452  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.303783  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.303801  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.303799  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303811  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303821  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.304077  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.304094  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.304107  802960 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-489319"
	I1007 13:48:43.306084  802960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1007 13:48:43.307478  802960 addons.go:510] duration metric: took 1.863046306s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1007 13:48:43.778309  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:45.778814  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:47.775390  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:47.775417  802960 pod_ready.go:82] duration metric: took 6.005863403s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:47.775431  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789544  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.789573  802960 pod_ready.go:82] duration metric: took 1.01413369s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789587  802960 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796239  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.796267  802960 pod_ready.go:82] duration metric: took 6.671875ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796280  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.806996  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.807030  802960 pod_ready.go:82] duration metric: took 10.740949ms for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.807046  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814301  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.814335  802960 pod_ready.go:82] duration metric: took 7.279716ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814350  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976171  802960 pod_ready.go:93] pod "kube-proxy-jpvx5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.976198  802960 pod_ready.go:82] duration metric: took 161.84042ms for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976209  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175024  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:50.175051  802960 pod_ready.go:82] duration metric: took 1.198834555s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175062  802960 pod_ready.go:39] duration metric: took 8.42141844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:50.175094  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:48:50.175154  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:48:50.190906  802960 api_server.go:72] duration metric: took 8.746497817s to wait for apiserver process to appear ...
	I1007 13:48:50.190937  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:48:50.190969  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:48:50.196727  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:48:50.197751  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:48:50.197774  802960 api_server.go:131] duration metric: took 6.829939ms to wait for apiserver health ...
	I1007 13:48:50.197783  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:48:50.378985  802960 system_pods.go:59] 9 kube-system pods found
	I1007 13:48:50.379015  802960 system_pods.go:61] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.379023  802960 system_pods.go:61] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.379029  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.379034  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.379041  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.379045  802960 system_pods.go:61] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.379050  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.379059  802960 system_pods.go:61] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.379066  802960 system_pods.go:61] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.379078  802960 system_pods.go:74] duration metric: took 181.288145ms to wait for pod list to return data ...
	I1007 13:48:50.379091  802960 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:48:50.574098  802960 default_sa.go:45] found service account: "default"
	I1007 13:48:50.574127  802960 default_sa.go:55] duration metric: took 195.025343ms for default service account to be created ...
	I1007 13:48:50.574137  802960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:48:50.777201  802960 system_pods.go:86] 9 kube-system pods found
	I1007 13:48:50.777233  802960 system_pods.go:89] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.777238  802960 system_pods.go:89] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.777243  802960 system_pods.go:89] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.777247  802960 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.777252  802960 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.777257  802960 system_pods.go:89] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.777260  802960 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.777269  802960 system_pods.go:89] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.777273  802960 system_pods.go:89] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.777283  802960 system_pods.go:126] duration metric: took 203.138905ms to wait for k8s-apps to be running ...
	I1007 13:48:50.777292  802960 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:48:50.777338  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:50.794312  802960 system_svc.go:56] duration metric: took 17.00771ms WaitForService to wait for kubelet
	I1007 13:48:50.794350  802960 kubeadm.go:582] duration metric: took 9.349947078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:48:50.794376  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:48:50.974457  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:48:50.974484  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:48:50.974507  802960 node_conditions.go:105] duration metric: took 180.125373ms to run NodePressure ...
	I1007 13:48:50.974520  802960 start.go:241] waiting for startup goroutines ...
	I1007 13:48:50.974526  802960 start.go:246] waiting for cluster config update ...
	I1007 13:48:50.974537  802960 start.go:255] writing updated cluster config ...
	I1007 13:48:50.974827  802960 ssh_runner.go:195] Run: rm -f paused
	I1007 13:48:51.030094  802960 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:48:51.032736  802960 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-489319" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.086010844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309169085976926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34f3b85e-5ad7-4c32-ba5b-d4cf8d88b294 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.086627870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e53a798-35b3-448b-b827-43c1de2da13a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.086676373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e53a798-35b3-448b-b827-43c1de2da13a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.086728079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7e53a798-35b3-448b-b827-43c1de2da13a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.123880476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2bec0735-5576-437c-ab74-31ba6376d822 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.123990397Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2bec0735-5576-437c-ab74-31ba6376d822 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.125787552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b81220d-4c48-4f99-b012-1d159fe66a56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.126476472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309169126350632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b81220d-4c48-4f99-b012-1d159fe66a56 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.127540348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ed44e36-2fd9-4fd1-819f-768da6f0f89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.127627286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ed44e36-2fd9-4fd1-819f-768da6f0f89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.127680700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3ed44e36-2fd9-4fd1-819f-768da6f0f89b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.163747085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9b3129c-1240-4002-9e33-b3d387f38b0d name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.163892622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9b3129c-1240-4002-9e33-b3d387f38b0d name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.165931920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fcc0ee8-b265-4130-aa16-30361d141244 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.166544132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309169166503624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fcc0ee8-b265-4130-aa16-30361d141244 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.167260875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbab3efe-5e9a-4220-b795-269190db658b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.167342332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbab3efe-5e9a-4220-b795-269190db658b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.167381601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbab3efe-5e9a-4220-b795-269190db658b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.200696670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8d25df8-b33f-4eea-b51e-88e01601a0ee name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.200771990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8d25df8-b33f-4eea-b51e-88e01601a0ee name=/runtime.v1.RuntimeService/Version
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.202207033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea7358cb-9506-44dd-aa53-f74fc1560971 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.202674503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309169202645790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea7358cb-9506-44dd-aa53-f74fc1560971 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.203470774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=731488bf-eaf9-4013-98db-f3606fbf6bbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.203542198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=731488bf-eaf9-4013-98db-f3606fbf6bbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:52:49 old-k8s-version-120978 crio[632]: time="2024-10-07 13:52:49.203605114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=731488bf-eaf9-4013-98db-f3606fbf6bbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059927] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.123867] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.762449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.678964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.628433] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.062444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070622] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.220328] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.150806] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.291850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +7.145908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820671] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[Oct 7 13:36] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 13:39] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Oct 7 13:41] systemd-fstab-generator[5332]: Ignoring "noauto" option for root device
	[  +0.074388] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:52:49 up 17 min,  0 users,  load average: 0.11, 0.10, 0.05
	Linux old-k8s-version-120978 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: created by k8s.io/kubernetes/pkg/util/config.(*Mux).Channel
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/config/config.go:77 +0x1c6
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: goroutine 147 [runnable]:
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d10a0, 0xc0001100c0)
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: created by k8s.io/kubernetes/pkg/kubelet/config.newSourceApiserverFromLW
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47 +0x1e5
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: goroutine 148 [runnable]:
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc000aa2dc0, 0xc0001100c0)
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:368
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/informers.(*sharedInformerFactory).Start
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: goroutine 149 [runnable]:
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d1180, 0xc0001100c0)
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Oct 07 13:52:46 old-k8s-version-120978 kubelet[6514]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Oct 07 13:52:47 old-k8s-version-120978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 07 13:52:47 old-k8s-version-120978 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 07 13:52:47 old-k8s-version-120978 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 07 13:52:47 old-k8s-version-120978 kubelet[6523]: I1007 13:52:47.184974    6523 server.go:416] Version: v1.20.0
	Oct 07 13:52:47 old-k8s-version-120978 kubelet[6523]: I1007 13:52:47.185597    6523 server.go:837] Client rotation is on, will bootstrap in background
	Oct 07 13:52:47 old-k8s-version-120978 kubelet[6523]: I1007 13:52:47.188963    6523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 07 13:52:47 old-k8s-version-120978 kubelet[6523]: W1007 13:52:47.190312    6523 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 07 13:52:47 old-k8s-version-120978 kubelet[6523]: I1007 13:52:47.190827    6523 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (257.633441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-120978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-07 13:57:51.64844764 +0000 UTC m=+6604.846987626
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489319 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-489319 logs -n 25: (1.919073121s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo journalctl                       | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo docker                           | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo                                  | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo cat                              | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo containerd                       | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo systemctl                        | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo find                             | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-221184 sudo crio                             | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-221184                                       | auto-221184           | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	| start   | -p custom-flannel-221184                             | custom-flannel-221184 | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-221184 pgrep -a                           | kindnet-221184        | jenkins | v1.34.0 | 07 Oct 24 13:57 UTC | 07 Oct 24 13:57 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:56:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:56:47.766098  810702 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:56:47.766471  810702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:56:47.766483  810702 out.go:358] Setting ErrFile to fd 2...
	I1007 13:56:47.766487  810702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:56:47.766678  810702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:56:47.767305  810702 out.go:352] Setting JSON to false
	I1007 13:56:47.768490  810702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13157,"bootTime":1728296251,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:56:47.768564  810702 start.go:139] virtualization: kvm guest
	I1007 13:56:47.771118  810702 out.go:177] * [custom-flannel-221184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:56:47.772455  810702 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:56:47.772463  810702 notify.go:220] Checking for updates...
	I1007 13:56:47.775364  810702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:56:47.776816  810702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:56:47.778215  810702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:56:47.779849  810702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:56:47.781182  810702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:56:47.783206  810702 config.go:182] Loaded profile config "calico-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:47.783357  810702 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:47.783481  810702 config.go:182] Loaded profile config "kindnet-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:47.783613  810702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:56:47.824267  810702 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:56:47.825839  810702 start.go:297] selected driver: kvm2
	I1007 13:56:47.825857  810702 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:56:47.825871  810702 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:56:47.826938  810702 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:56:47.827129  810702 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:56:47.844751  810702 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:56:47.844828  810702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:56:47.845164  810702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:56:47.845221  810702 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1007 13:56:47.845267  810702 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1007 13:56:47.845332  810702 start.go:340] cluster config:
	{Name:custom-flannel-221184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:56:47.845441  810702 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:56:47.847366  810702 out.go:177] * Starting "custom-flannel-221184" primary control-plane node in "custom-flannel-221184" cluster
	I1007 13:56:47.522058  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:47.522732  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find current IP address of domain kindnet-221184 in network mk-kindnet-221184
	I1007 13:56:47.522757  809201 main.go:141] libmachine: (kindnet-221184) DBG | I1007 13:56:47.522654  809254 retry.go:31] will retry after 2.60094152s: waiting for machine to come up
	I1007 13:56:50.124935  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:50.125517  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find current IP address of domain kindnet-221184 in network mk-kindnet-221184
	I1007 13:56:50.125541  809201 main.go:141] libmachine: (kindnet-221184) DBG | I1007 13:56:50.125483  809254 retry.go:31] will retry after 3.55871648s: waiting for machine to come up
	I1007 13:56:47.848620  810702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:56:47.848675  810702 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:56:47.848713  810702 cache.go:56] Caching tarball of preloaded images
	I1007 13:56:47.848827  810702 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:56:47.848843  810702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:56:47.848952  810702 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/custom-flannel-221184/config.json ...
	I1007 13:56:47.848978  810702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/custom-flannel-221184/config.json: {Name:mk6d45276513a414c7d5bd09f285dab1768f7a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:47.849157  810702 start.go:360] acquireMachinesLock for custom-flannel-221184: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:56:53.685751  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:53.686254  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find current IP address of domain kindnet-221184 in network mk-kindnet-221184
	I1007 13:56:53.686276  809201 main.go:141] libmachine: (kindnet-221184) DBG | I1007 13:56:53.686207  809254 retry.go:31] will retry after 4.212705249s: waiting for machine to come up
	I1007 13:56:57.900730  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:57.901277  809201 main.go:141] libmachine: (kindnet-221184) Found IP for machine: 192.168.50.180
	I1007 13:56:57.901303  809201 main.go:141] libmachine: (kindnet-221184) Reserving static IP address...
	I1007 13:56:57.901334  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has current primary IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:57.901674  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find host DHCP lease matching {name: "kindnet-221184", mac: "52:54:00:07:2d:ce", ip: "192.168.50.180"} in network mk-kindnet-221184
	I1007 13:56:57.985837  809201 main.go:141] libmachine: (kindnet-221184) DBG | Getting to WaitForSSH function...
	I1007 13:56:57.985873  809201 main.go:141] libmachine: (kindnet-221184) Reserved static IP address: 192.168.50.180
	I1007 13:56:57.985904  809201 main.go:141] libmachine: (kindnet-221184) Waiting for SSH to be available...
	I1007 13:56:57.988584  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:56:57.988874  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184
	I1007 13:56:57.988900  809201 main.go:141] libmachine: (kindnet-221184) DBG | unable to find defined IP address of network mk-kindnet-221184 interface with MAC address 52:54:00:07:2d:ce
	I1007 13:56:57.989086  809201 main.go:141] libmachine: (kindnet-221184) DBG | Using SSH client type: external
	I1007 13:56:57.989121  809201 main.go:141] libmachine: (kindnet-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa (-rw-------)
	I1007 13:56:57.989154  809201 main.go:141] libmachine: (kindnet-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:56:57.989174  809201 main.go:141] libmachine: (kindnet-221184) DBG | About to run SSH command:
	I1007 13:56:57.989187  809201 main.go:141] libmachine: (kindnet-221184) DBG | exit 0
	I1007 13:56:57.992887  809201 main.go:141] libmachine: (kindnet-221184) DBG | SSH cmd err, output: exit status 255: 
	I1007 13:56:57.992908  809201 main.go:141] libmachine: (kindnet-221184) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 13:56:57.992915  809201 main.go:141] libmachine: (kindnet-221184) DBG | command : exit 0
	I1007 13:56:57.992920  809201 main.go:141] libmachine: (kindnet-221184) DBG | err     : exit status 255
	I1007 13:56:57.992926  809201 main.go:141] libmachine: (kindnet-221184) DBG | output  : 
	I1007 13:57:00.993159  809201 main.go:141] libmachine: (kindnet-221184) DBG | Getting to WaitForSSH function...
	I1007 13:57:00.996102  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:00.996449  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:00.996481  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:00.996647  809201 main.go:141] libmachine: (kindnet-221184) DBG | Using SSH client type: external
	I1007 13:57:00.996677  809201 main.go:141] libmachine: (kindnet-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa (-rw-------)
	I1007 13:57:00.996718  809201 main.go:141] libmachine: (kindnet-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:57:00.996739  809201 main.go:141] libmachine: (kindnet-221184) DBG | About to run SSH command:
	I1007 13:57:00.996760  809201 main.go:141] libmachine: (kindnet-221184) DBG | exit 0
	I1007 13:57:01.118480  809201 main.go:141] libmachine: (kindnet-221184) DBG | SSH cmd err, output: <nil>: 
	I1007 13:57:01.118738  809201 main.go:141] libmachine: (kindnet-221184) KVM machine creation complete!
	I1007 13:57:01.119088  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetConfigRaw
	I1007 13:57:01.119707  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:01.119898  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:01.120043  809201 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:57:01.120056  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetState
	I1007 13:57:01.121477  809201 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:57:01.121495  809201 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:57:01.121505  809201 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:57:01.121513  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.124353  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.124749  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.124780  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.124988  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.125192  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.125344  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.125449  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.125591  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:01.125850  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:01.125862  809201 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:57:01.221515  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:57:01.221542  809201 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:57:01.221550  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.224663  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.225117  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.225147  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.225312  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.225573  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.225793  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.225935  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.226279  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:01.226465  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:01.226476  809201 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:57:01.327303  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:57:01.327420  809201 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:57:01.327432  809201 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:57:01.327443  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetMachineName
	I1007 13:57:01.327745  809201 buildroot.go:166] provisioning hostname "kindnet-221184"
	I1007 13:57:01.327774  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetMachineName
	I1007 13:57:01.327992  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.330703  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.331073  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.331099  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.331376  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.331604  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.331754  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.331915  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.332055  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:01.332263  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:01.332280  809201 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-221184 && echo "kindnet-221184" | sudo tee /etc/hostname
	I1007 13:57:01.445954  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-221184
	
	I1007 13:57:01.445989  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.449702  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.450086  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.450124  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.450369  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.450598  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.450772  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.450882  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.451003  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:01.451197  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:01.451213  809201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-221184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-221184/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-221184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:57:01.559599  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:57:01.559637  809201 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:57:01.559696  809201 buildroot.go:174] setting up certificates
	I1007 13:57:01.559714  809201 provision.go:84] configureAuth start
	I1007 13:57:01.559730  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetMachineName
	I1007 13:57:01.560069  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetIP
	I1007 13:57:01.562902  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.563258  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.563288  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.563431  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.565768  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.566185  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.566213  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.566342  809201 provision.go:143] copyHostCerts
	I1007 13:57:01.566399  809201 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:57:01.566424  809201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:57:01.566486  809201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:57:01.566584  809201 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:57:01.566592  809201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:57:01.566615  809201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:57:01.566684  809201 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:57:01.566691  809201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:57:01.566711  809201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:57:01.566769  809201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.kindnet-221184 san=[127.0.0.1 192.168.50.180 kindnet-221184 localhost minikube]
	I1007 13:57:01.711437  809201 provision.go:177] copyRemoteCerts
	I1007 13:57:01.711507  809201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:57:01.711535  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.714524  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.714956  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.715013  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.715200  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.715416  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.715564  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.715699  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:01.797520  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:57:01.824939  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:57:01.852315  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1007 13:57:01.879720  809201 provision.go:87] duration metric: took 319.98707ms to configureAuth
	I1007 13:57:01.879757  809201 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:57:01.879987  809201 config.go:182] Loaded profile config "kindnet-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:57:01.880090  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:01.883792  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.884181  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:01.884210  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:01.884442  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:01.884660  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.884843  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:01.884990  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:01.885136  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:01.885314  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:01.885328  809201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:57:02.108236  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:57:02.108269  809201 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:57:02.108279  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetURL
	I1007 13:57:02.109380  809201 main.go:141] libmachine: (kindnet-221184) DBG | Using libvirt version 6000000
	I1007 13:57:02.111641  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.112075  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.112167  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.112306  809201 main.go:141] libmachine: Docker is up and running!
	I1007 13:57:02.112319  809201 main.go:141] libmachine: Reticulating splines...
	I1007 13:57:02.112326  809201 client.go:171] duration metric: took 24.737179088s to LocalClient.Create
	I1007 13:57:02.112362  809201 start.go:167] duration metric: took 24.737288482s to libmachine.API.Create "kindnet-221184"
	I1007 13:57:02.112373  809201 start.go:293] postStartSetup for "kindnet-221184" (driver="kvm2")
	I1007 13:57:02.112383  809201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:57:02.112408  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:02.112633  809201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:57:02.112656  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:02.114823  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.115140  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.115168  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.115324  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:02.115480  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:02.115642  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:02.115777  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:02.199077  809201 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:57:02.203962  809201 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:57:02.204004  809201 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:57:02.204144  809201 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:57:02.204242  809201 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:57:02.204359  809201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:57:02.215227  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:57:02.242809  809201 start.go:296] duration metric: took 130.421244ms for postStartSetup
	I1007 13:57:02.242868  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetConfigRaw
	I1007 13:57:02.243462  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetIP
	I1007 13:57:02.246223  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.246594  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.246621  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.246828  809201 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/config.json ...
	I1007 13:57:02.247043  809201 start.go:128] duration metric: took 24.897307993s to createHost
	I1007 13:57:02.247068  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:02.351681  809501 start.go:364] duration metric: took 23.764169567s to acquireMachinesLock for "calico-221184"
	I1007 13:57:02.351752  809501 start.go:93] Provisioning new machine with config: &{Name:calico-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:57:02.351910  809501 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:57:02.354000  809501 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 13:57:02.354244  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:02.354314  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:02.373830  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I1007 13:57:02.374413  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:02.375058  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:02.375081  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:02.375440  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:02.375758  809501 main.go:141] libmachine: (calico-221184) Calling .GetMachineName
	I1007 13:57:02.375946  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:02.376147  809501 start.go:159] libmachine.API.Create for "calico-221184" (driver="kvm2")
	I1007 13:57:02.376179  809501 client.go:168] LocalClient.Create starting
	I1007 13:57:02.376213  809501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:57:02.376256  809501 main.go:141] libmachine: Decoding PEM data...
	I1007 13:57:02.376273  809501 main.go:141] libmachine: Parsing certificate...
	I1007 13:57:02.376327  809501 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:57:02.376345  809501 main.go:141] libmachine: Decoding PEM data...
	I1007 13:57:02.376355  809501 main.go:141] libmachine: Parsing certificate...
	I1007 13:57:02.376371  809501 main.go:141] libmachine: Running pre-create checks...
	I1007 13:57:02.376381  809501 main.go:141] libmachine: (calico-221184) Calling .PreCreateCheck
	I1007 13:57:02.376725  809501 main.go:141] libmachine: (calico-221184) Calling .GetConfigRaw
	I1007 13:57:02.377153  809501 main.go:141] libmachine: Creating machine...
	I1007 13:57:02.377171  809501 main.go:141] libmachine: (calico-221184) Calling .Create
	I1007 13:57:02.377348  809501 main.go:141] libmachine: (calico-221184) Creating KVM machine...
	I1007 13:57:02.378835  809501 main.go:141] libmachine: (calico-221184) DBG | found existing default KVM network
	I1007 13:57:02.380692  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:02.380463  810845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201610}
	I1007 13:57:02.380722  809501 main.go:141] libmachine: (calico-221184) DBG | created network xml: 
	I1007 13:57:02.380735  809501 main.go:141] libmachine: (calico-221184) DBG | <network>
	I1007 13:57:02.380743  809501 main.go:141] libmachine: (calico-221184) DBG |   <name>mk-calico-221184</name>
	I1007 13:57:02.380753  809501 main.go:141] libmachine: (calico-221184) DBG |   <dns enable='no'/>
	I1007 13:57:02.380763  809501 main.go:141] libmachine: (calico-221184) DBG |   
	I1007 13:57:02.380772  809501 main.go:141] libmachine: (calico-221184) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 13:57:02.380777  809501 main.go:141] libmachine: (calico-221184) DBG |     <dhcp>
	I1007 13:57:02.380786  809501 main.go:141] libmachine: (calico-221184) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 13:57:02.380796  809501 main.go:141] libmachine: (calico-221184) DBG |     </dhcp>
	I1007 13:57:02.380814  809501 main.go:141] libmachine: (calico-221184) DBG |   </ip>
	I1007 13:57:02.380824  809501 main.go:141] libmachine: (calico-221184) DBG |   
	I1007 13:57:02.380832  809501 main.go:141] libmachine: (calico-221184) DBG | </network>
	I1007 13:57:02.380846  809501 main.go:141] libmachine: (calico-221184) DBG | 
	I1007 13:57:02.387314  809501 main.go:141] libmachine: (calico-221184) DBG | trying to create private KVM network mk-calico-221184 192.168.39.0/24...
	I1007 13:57:02.475638  809501 main.go:141] libmachine: (calico-221184) DBG | private KVM network mk-calico-221184 192.168.39.0/24 created
	I1007 13:57:02.475675  809501 main.go:141] libmachine: (calico-221184) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184 ...
	I1007 13:57:02.475690  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:02.475594  810845 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:57:02.475712  809501 main.go:141] libmachine: (calico-221184) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:57:02.475730  809501 main.go:141] libmachine: (calico-221184) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:57:02.740223  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:02.740080  810845 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa...
	I1007 13:57:02.846243  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:02.846097  810845 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/calico-221184.rawdisk...
	I1007 13:57:02.846272  809501 main.go:141] libmachine: (calico-221184) DBG | Writing magic tar header
	I1007 13:57:02.846283  809501 main.go:141] libmachine: (calico-221184) DBG | Writing SSH key tar header
	I1007 13:57:02.846294  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:02.846220  810845 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184 ...
	I1007 13:57:02.846308  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184
	I1007 13:57:02.846325  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:57:02.846337  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184 (perms=drwx------)
	I1007 13:57:02.846345  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:57:02.846358  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:57:02.846407  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:57:02.846427  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:57:02.846439  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:57:02.846459  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:57:02.846484  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:57:02.846495  809501 main.go:141] libmachine: (calico-221184) DBG | Checking permissions on dir: /home
	I1007 13:57:02.846508  809501 main.go:141] libmachine: (calico-221184) DBG | Skipping /home - not owner
	I1007 13:57:02.846517  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:57:02.846528  809501 main.go:141] libmachine: (calico-221184) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:57:02.846536  809501 main.go:141] libmachine: (calico-221184) Creating domain...
	I1007 13:57:02.847838  809501 main.go:141] libmachine: (calico-221184) define libvirt domain using xml: 
	I1007 13:57:02.847864  809501 main.go:141] libmachine: (calico-221184) <domain type='kvm'>
	I1007 13:57:02.847874  809501 main.go:141] libmachine: (calico-221184)   <name>calico-221184</name>
	I1007 13:57:02.847885  809501 main.go:141] libmachine: (calico-221184)   <memory unit='MiB'>3072</memory>
	I1007 13:57:02.847892  809501 main.go:141] libmachine: (calico-221184)   <vcpu>2</vcpu>
	I1007 13:57:02.847899  809501 main.go:141] libmachine: (calico-221184)   <features>
	I1007 13:57:02.847907  809501 main.go:141] libmachine: (calico-221184)     <acpi/>
	I1007 13:57:02.847914  809501 main.go:141] libmachine: (calico-221184)     <apic/>
	I1007 13:57:02.847924  809501 main.go:141] libmachine: (calico-221184)     <pae/>
	I1007 13:57:02.847945  809501 main.go:141] libmachine: (calico-221184)     
	I1007 13:57:02.847957  809501 main.go:141] libmachine: (calico-221184)   </features>
	I1007 13:57:02.847964  809501 main.go:141] libmachine: (calico-221184)   <cpu mode='host-passthrough'>
	I1007 13:57:02.847982  809501 main.go:141] libmachine: (calico-221184)   
	I1007 13:57:02.847994  809501 main.go:141] libmachine: (calico-221184)   </cpu>
	I1007 13:57:02.848002  809501 main.go:141] libmachine: (calico-221184)   <os>
	I1007 13:57:02.848014  809501 main.go:141] libmachine: (calico-221184)     <type>hvm</type>
	I1007 13:57:02.848022  809501 main.go:141] libmachine: (calico-221184)     <boot dev='cdrom'/>
	I1007 13:57:02.848034  809501 main.go:141] libmachine: (calico-221184)     <boot dev='hd'/>
	I1007 13:57:02.848046  809501 main.go:141] libmachine: (calico-221184)     <bootmenu enable='no'/>
	I1007 13:57:02.848080  809501 main.go:141] libmachine: (calico-221184)   </os>
	I1007 13:57:02.848098  809501 main.go:141] libmachine: (calico-221184)   <devices>
	I1007 13:57:02.848106  809501 main.go:141] libmachine: (calico-221184)     <disk type='file' device='cdrom'>
	I1007 13:57:02.848115  809501 main.go:141] libmachine: (calico-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/boot2docker.iso'/>
	I1007 13:57:02.848121  809501 main.go:141] libmachine: (calico-221184)       <target dev='hdc' bus='scsi'/>
	I1007 13:57:02.848127  809501 main.go:141] libmachine: (calico-221184)       <readonly/>
	I1007 13:57:02.848139  809501 main.go:141] libmachine: (calico-221184)     </disk>
	I1007 13:57:02.848151  809501 main.go:141] libmachine: (calico-221184)     <disk type='file' device='disk'>
	I1007 13:57:02.848198  809501 main.go:141] libmachine: (calico-221184)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:57:02.848228  809501 main.go:141] libmachine: (calico-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/calico-221184.rawdisk'/>
	I1007 13:57:02.848241  809501 main.go:141] libmachine: (calico-221184)       <target dev='hda' bus='virtio'/>
	I1007 13:57:02.848251  809501 main.go:141] libmachine: (calico-221184)     </disk>
	I1007 13:57:02.848261  809501 main.go:141] libmachine: (calico-221184)     <interface type='network'>
	I1007 13:57:02.848272  809501 main.go:141] libmachine: (calico-221184)       <source network='mk-calico-221184'/>
	I1007 13:57:02.848281  809501 main.go:141] libmachine: (calico-221184)       <model type='virtio'/>
	I1007 13:57:02.848292  809501 main.go:141] libmachine: (calico-221184)     </interface>
	I1007 13:57:02.848303  809501 main.go:141] libmachine: (calico-221184)     <interface type='network'>
	I1007 13:57:02.848314  809501 main.go:141] libmachine: (calico-221184)       <source network='default'/>
	I1007 13:57:02.848322  809501 main.go:141] libmachine: (calico-221184)       <model type='virtio'/>
	I1007 13:57:02.848334  809501 main.go:141] libmachine: (calico-221184)     </interface>
	I1007 13:57:02.848347  809501 main.go:141] libmachine: (calico-221184)     <serial type='pty'>
	I1007 13:57:02.848360  809501 main.go:141] libmachine: (calico-221184)       <target port='0'/>
	I1007 13:57:02.848370  809501 main.go:141] libmachine: (calico-221184)     </serial>
	I1007 13:57:02.848376  809501 main.go:141] libmachine: (calico-221184)     <console type='pty'>
	I1007 13:57:02.848394  809501 main.go:141] libmachine: (calico-221184)       <target type='serial' port='0'/>
	I1007 13:57:02.848412  809501 main.go:141] libmachine: (calico-221184)     </console>
	I1007 13:57:02.848423  809501 main.go:141] libmachine: (calico-221184)     <rng model='virtio'>
	I1007 13:57:02.848432  809501 main.go:141] libmachine: (calico-221184)       <backend model='random'>/dev/random</backend>
	I1007 13:57:02.848442  809501 main.go:141] libmachine: (calico-221184)     </rng>
	I1007 13:57:02.848452  809501 main.go:141] libmachine: (calico-221184)     
	I1007 13:57:02.848480  809501 main.go:141] libmachine: (calico-221184)     
	I1007 13:57:02.848506  809501 main.go:141] libmachine: (calico-221184)   </devices>
	I1007 13:57:02.848519  809501 main.go:141] libmachine: (calico-221184) </domain>
	I1007 13:57:02.848532  809501 main.go:141] libmachine: (calico-221184) 
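
The lines above show the libvirt domain XML that the kvm2 driver echoes before defining the machine. As a rough illustration only (not minikube's actual driver code), the following Go sketch defines and starts a domain from such an XML description. It assumes the libvirt.org/go/libvirt bindings (which need cgo and libvirt headers), a qemu:///system connection as in the KVMQemuURI field logged above, and a hypothetical local file domain.xml holding the XML.

// Sketch, assuming the libvirt.org/go/libvirt bindings: define and start a
// KVM domain from an XML description like the one printed in the log above.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// The log shows KVMQemuURI:qemu:///system, so connect to the system hypervisor.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// domain.xml is a hypothetical local copy of the <domain type='kvm'> ... </domain>
	// description echoed above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read domain XML: %v", err)
	}

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
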
	I1007 13:57:02.853373  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:91:21:32 in network default
	I1007 13:57:02.854074  809501 main.go:141] libmachine: (calico-221184) Ensuring networks are active...
	I1007 13:57:02.854102  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:02.854893  809501 main.go:141] libmachine: (calico-221184) Ensuring network default is active
	I1007 13:57:02.855261  809501 main.go:141] libmachine: (calico-221184) Ensuring network mk-calico-221184 is active
	I1007 13:57:02.855915  809501 main.go:141] libmachine: (calico-221184) Getting domain xml...
	I1007 13:57:02.856752  809501 main.go:141] libmachine: (calico-221184) Creating domain...
	I1007 13:57:03.239101  809501 main.go:141] libmachine: (calico-221184) Waiting to get IP...
	I1007 13:57:03.239930  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:03.240508  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:03.240532  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:03.240487  810845 retry.go:31] will retry after 270.812851ms: waiting for machine to come up
	I1007 13:57:02.249552  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.249949  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.249977  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.250200  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:02.250393  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:02.250564  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:02.250697  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:02.250865  809201 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:02.251078  809201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1007 13:57:02.251090  809201 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:57:02.351476  809201 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728309422.326319193
	
	I1007 13:57:02.351509  809201 fix.go:216] guest clock: 1728309422.326319193
	I1007 13:57:02.351517  809201 fix.go:229] Guest: 2024-10-07 13:57:02.326319193 +0000 UTC Remote: 2024-10-07 13:57:02.247057168 +0000 UTC m=+25.062462694 (delta=79.262025ms)
	I1007 13:57:02.351539  809201 fix.go:200] guest clock delta is within tolerance: 79.262025ms
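
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it with the host clock, and accept the 79.262025ms drift as within tolerance. A minimal sketch of that comparison follows, using the exact timestamps from the log; the 2s tolerance is an assumed illustrative value, since the excerpt does not show minikube's actual threshold.

// Sketch of the guest-clock check: parse the guest's `date +%s.%N` output,
// compare it with the host clock, and accept the drift if under a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log: guest 1728309422.326319193, host 13:57:02.247057168 UTC.
	host := time.Date(2024, 10, 7, 13, 57, 2, 247057168, time.UTC)
	delta, err := guestClockDelta("1728309422.326319193", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
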
	I1007 13:57:02.351544  809201 start.go:83] releasing machines lock for "kindnet-221184", held for 25.001947967s
	I1007 13:57:02.351568  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:02.352312  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetIP
	I1007 13:57:02.355435  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.355788  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.355824  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.356052  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:02.356728  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:02.356978  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:02.357108  809201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:57:02.357163  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:02.357244  809201 ssh_runner.go:195] Run: cat /version.json
	I1007 13:57:02.357279  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:02.360639  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.360925  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.361126  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.361170  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.361308  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:02.361331  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:02.361354  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:02.361590  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:02.361594  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:02.361743  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:02.361749  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:02.361929  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:02.361955  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:02.362091  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:02.462158  809201 ssh_runner.go:195] Run: systemctl --version
	I1007 13:57:02.469550  809201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:57:02.641601  809201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:57:02.647951  809201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:57:02.648040  809201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:57:02.665923  809201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:57:02.665959  809201 start.go:495] detecting cgroup driver to use...
	I1007 13:57:02.666079  809201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:57:02.682682  809201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:57:02.698317  809201 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:57:02.698402  809201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:57:02.713232  809201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:57:02.728270  809201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:57:02.857068  809201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:57:03.049041  809201 docker.go:233] disabling docker service ...
	I1007 13:57:03.049126  809201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:57:03.067965  809201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:57:03.084960  809201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:57:03.212477  809201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:57:03.345637  809201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:57:03.360869  809201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:57:03.385320  809201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:57:03.385400  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.398955  809201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:57:03.399039  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.410974  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.423176  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.435129  809201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:57:03.447104  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.458705  809201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:03.479125  809201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
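
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf on the guest to pin the pause image and switch cri-o to the cgroupfs cgroup manager. The Go sketch below reproduces just those two edits as an illustration of what the shell pipeline changes; the file path and replacement values are taken from the log, but this is not minikube's implementation.

// Illustrative sketch of two of the edits applied above: pin pause_image and
// set cgroup_manager to "cgroupfs" in the cri-o drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}
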
	I1007 13:57:03.491710  809201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:57:03.502034  809201 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:57:03.502104  809201 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:57:03.516757  809201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:57:03.527997  809201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:03.666176  809201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:57:03.769015  809201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:57:03.769132  809201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:57:03.775195  809201 start.go:563] Will wait 60s for crictl version
	I1007 13:57:03.775276  809201 ssh_runner.go:195] Run: which crictl
	I1007 13:57:03.779924  809201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:57:03.826733  809201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:57:03.826827  809201 ssh_runner.go:195] Run: crio --version
	I1007 13:57:03.858429  809201 ssh_runner.go:195] Run: crio --version
	I1007 13:57:03.894589  809201 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:57:03.895682  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetIP
	I1007 13:57:03.898657  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:03.899082  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:03.899110  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:03.899341  809201 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1007 13:57:03.905023  809201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:57:03.920266  809201 kubeadm.go:883] updating cluster {Name:kindnet-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:kindnet-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:57:03.920382  809201 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:57:03.920432  809201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:57:03.958528  809201 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:57:03.958624  809201 ssh_runner.go:195] Run: which lz4
	I1007 13:57:03.964118  809201 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:57:03.969266  809201 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:57:03.969311  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:57:05.591976  809201 crio.go:462] duration metric: took 1.627979385s to copy over tarball
	I1007 13:57:05.592160  809201 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:57:03.512992  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:03.513519  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:03.513570  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:03.513476  810845 retry.go:31] will retry after 364.863643ms: waiting for machine to come up
	I1007 13:57:03.880282  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:03.880816  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:03.880845  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:03.880774  810845 retry.go:31] will retry after 395.05534ms: waiting for machine to come up
	I1007 13:57:04.277514  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:04.278581  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:04.278614  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:04.278541  810845 retry.go:31] will retry after 583.082481ms: waiting for machine to come up
	I1007 13:57:04.864031  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:04.865060  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:04.865101  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:04.865035  810845 retry.go:31] will retry after 725.866991ms: waiting for machine to come up
	I1007 13:57:05.593082  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:05.593631  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:05.593662  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:05.593604  810845 retry.go:31] will retry after 898.630639ms: waiting for machine to come up
	I1007 13:57:06.494074  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:06.494424  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:06.494476  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:06.494408  810845 retry.go:31] will retry after 820.132671ms: waiting for machine to come up
	I1007 13:57:07.315807  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:07.316282  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:07.316311  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:07.316231  810845 retry.go:31] will retry after 1.130598675s: waiting for machine to come up
	I1007 13:57:08.448269  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:08.448786  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:08.448815  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:08.448733  810845 retry.go:31] will retry after 1.48161073s: waiting for machine to come up
	I1007 13:57:07.984649  809201 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.392439306s)
	I1007 13:57:07.984682  809201 crio.go:469] duration metric: took 2.39266432s to extract the tarball
	I1007 13:57:07.984701  809201 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:57:08.025237  809201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:57:08.083148  809201 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:57:08.083184  809201 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:57:08.083196  809201 kubeadm.go:934] updating node { 192.168.50.180 8443 v1.31.1 crio true true} ...
	I1007 13:57:08.083337  809201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-221184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1007 13:57:08.083438  809201 ssh_runner.go:195] Run: crio config
	I1007 13:57:08.146969  809201 cni.go:84] Creating CNI manager for "kindnet"
	I1007 13:57:08.147002  809201 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:57:08.147035  809201 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-221184 NodeName:kindnet-221184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:57:08.147195  809201 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-221184"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
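
The config dumped above is a multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, which is later copied to /var/tmp/minikube/kubeadm.yaml.new. As a small aside, the sketch below walks such a file and prints each document's apiVersion/kind, which can be handy when sanity-checking a generated config. It assumes the gopkg.in/yaml.v3 package and a hypothetical local file name kubeadm.yaml.

// Sketch: list the apiVersion/kind of every document in a multi-document
// kubeadm config like the one printed above. Assumes gopkg.in/yaml.v3.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatalf("decode document: %v", err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
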
	I1007 13:57:08.147270  809201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:57:08.162313  809201 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:57:08.162410  809201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:57:08.176469  809201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1007 13:57:08.194651  809201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:57:08.212415  809201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1007 13:57:08.231471  809201 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1007 13:57:08.235843  809201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:57:08.249657  809201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:08.383844  809201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:57:08.403654  809201 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184 for IP: 192.168.50.180
	I1007 13:57:08.403719  809201 certs.go:194] generating shared ca certs ...
	I1007 13:57:08.403749  809201 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.404014  809201 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:57:08.404065  809201 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:57:08.404074  809201 certs.go:256] generating profile certs ...
	I1007 13:57:08.404153  809201 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.key
	I1007 13:57:08.404172  809201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.crt with IP's: []
	I1007 13:57:08.776702  809201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.crt ...
	I1007 13:57:08.776740  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.crt: {Name:mk36649ad58a1db03ea0e6c2c68c441497fd795e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.776970  809201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.key ...
	I1007 13:57:08.776985  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/client.key: {Name:mk646adbd8e92145c408d3379ad6cc6f85495ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.777104  809201 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key.b2a23201
	I1007 13:57:08.777128  809201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt.b2a23201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.180]
	I1007 13:57:08.877802  809201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt.b2a23201 ...
	I1007 13:57:08.877840  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt.b2a23201: {Name:mkc43afa9e772243137e59a8c8059d4e7dd4f626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.878057  809201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key.b2a23201 ...
	I1007 13:57:08.878072  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key.b2a23201: {Name:mk8edf66410e2e3c1bf7779ceb2b81009ef5ea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.878172  809201 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt.b2a23201 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt
	I1007 13:57:08.878283  809201 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key.b2a23201 -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key
	I1007 13:57:08.878368  809201 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.key
	I1007 13:57:08.878392  809201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.crt with IP's: []
	I1007 13:57:08.920840  809201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.crt ...
	I1007 13:57:08.920878  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.crt: {Name:mk287dac1243a57de37ca21c6c7015c7fc089959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:08.921078  809201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.key ...
	I1007 13:57:08.921095  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.key: {Name:mka76c678c6a09eaf7cbdea308d565821cf19afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
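
The certs.go/crypto.go lines above generate the profile certificates, with the apiserver cert signed for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.180]. The standard-library sketch below shows how a serving certificate with those IP SANs could be produced; for brevity it self-signs, whereas minikube signs profile certs with its minikubeCA key, so this is an illustration of the SAN handling, not minikube's code.

// Sketch: generate a serving certificate whose IP SANs match the apiserver
// cert in the log. Self-signed here to keep the example short.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), // first service-CIDR IP
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.50.180"), // node IP from the log
		},
	}

	// Self-sign: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	certOut, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer certOut.Close()
	if err := pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}

	keyOut, err := os.Create("apiserver.key")
	if err != nil {
		log.Fatal(err)
	}
	defer keyOut.Close()
	if err := pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}); err != nil {
		log.Fatal(err)
	}
}
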
	I1007 13:57:08.921334  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:57:08.921387  809201 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:57:08.921403  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:57:08.921433  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:57:08.921487  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:57:08.921533  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:57:08.921585  809201 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:57:08.922326  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:57:08.951791  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:57:08.980248  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:57:09.008272  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:57:09.034918  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:57:09.062451  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:57:09.095311  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:57:09.132448  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/kindnet-221184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:57:09.159766  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:57:09.190739  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:57:09.218625  809201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:57:09.247765  809201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:57:09.268836  809201 ssh_runner.go:195] Run: openssl version
	I1007 13:57:09.276062  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:57:09.290131  809201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:57:09.295499  809201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:57:09.295566  809201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:57:09.302347  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:57:09.315922  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:57:09.330799  809201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:57:09.336544  809201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:57:09.336614  809201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:57:09.343422  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:57:09.358389  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:57:09.371970  809201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:09.377069  809201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:09.377149  809201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:09.384037  809201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:57:09.398317  809201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:57:09.403781  809201 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:57:09.403859  809201 kubeadm.go:392] StartCluster: {Name:kindnet-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:kindnet-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:57:09.403949  809201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:57:09.404010  809201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:57:09.450248  809201 cri.go:89] found id: ""
	I1007 13:57:09.450340  809201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:57:09.462902  809201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:57:09.474254  809201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:57:09.486744  809201 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:57:09.486770  809201 kubeadm.go:157] found existing configuration files:
	
	I1007 13:57:09.486831  809201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:57:09.497046  809201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:57:09.497123  809201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:57:09.508380  809201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:57:09.520607  809201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:57:09.520689  809201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:57:09.533606  809201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:57:09.545531  809201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:57:09.545605  809201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:57:09.557968  809201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:57:09.569918  809201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:57:09.569996  809201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:57:09.581123  809201 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:57:09.641911  809201 kubeadm.go:310] W1007 13:57:09.624981     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:57:09.644464  809201 kubeadm.go:310] W1007 13:57:09.627885     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:57:09.790271  809201 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:57:09.932400  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:09.932879  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:09.932907  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:09.932839  810845 retry.go:31] will retry after 2.059796696s: waiting for machine to come up
	I1007 13:57:11.994629  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:11.995128  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:11.995157  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:11.995085  810845 retry.go:31] will retry after 2.885676545s: waiting for machine to come up
	I1007 13:57:14.883459  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:14.884041  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:14.884080  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:14.883964  810845 retry.go:31] will retry after 3.228285615s: waiting for machine to come up
	I1007 13:57:18.113445  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:18.113944  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find current IP address of domain calico-221184 in network mk-calico-221184
	I1007 13:57:18.113977  809501 main.go:141] libmachine: (calico-221184) DBG | I1007 13:57:18.113894  810845 retry.go:31] will retry after 4.346472819s: waiting for machine to come up
	I1007 13:57:19.695472  809201 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:57:19.695562  809201 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:57:19.695657  809201 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:57:19.695815  809201 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:57:19.695984  809201 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:57:19.696086  809201 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:57:19.698213  809201 out.go:235]   - Generating certificates and keys ...
	I1007 13:57:19.698309  809201 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:57:19.698379  809201 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:57:19.698459  809201 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:57:19.698518  809201 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:57:19.698578  809201 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:57:19.698622  809201 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:57:19.698672  809201 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:57:19.698775  809201 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-221184 localhost] and IPs [192.168.50.180 127.0.0.1 ::1]
	I1007 13:57:19.698845  809201 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:57:19.699006  809201 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-221184 localhost] and IPs [192.168.50.180 127.0.0.1 ::1]
	I1007 13:57:19.699082  809201 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:57:19.699139  809201 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:57:19.699180  809201 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:57:19.699231  809201 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:57:19.699281  809201 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:57:19.699333  809201 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:57:19.699394  809201 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:57:19.699461  809201 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:57:19.699514  809201 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:57:19.699582  809201 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:57:19.699674  809201 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:57:19.701230  809201 out.go:235]   - Booting up control plane ...
	I1007 13:57:19.701312  809201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:57:19.701380  809201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:57:19.701445  809201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:57:19.701558  809201 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:57:19.701637  809201 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:57:19.701673  809201 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:57:19.701803  809201 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:57:19.701937  809201 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:57:19.702011  809201 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.244884ms
	I1007 13:57:19.702110  809201 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:57:19.702166  809201 kubeadm.go:310] [api-check] The API server is healthy after 5.503099739s
	I1007 13:57:19.702254  809201 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:57:19.702358  809201 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:57:19.702413  809201 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:57:19.702597  809201 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-221184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:57:19.702664  809201 kubeadm.go:310] [bootstrap-token] Using token: 8lhpsx.crsogb8kpwdb7eqt
	I1007 13:57:19.704970  809201 out.go:235]   - Configuring RBAC rules ...
	I1007 13:57:19.705096  809201 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:57:19.705172  809201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:57:19.705325  809201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:57:19.705471  809201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:57:19.705633  809201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:57:19.705711  809201 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:57:19.705870  809201 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:57:19.705959  809201 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:57:19.706003  809201 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:57:19.706009  809201 kubeadm.go:310] 
	I1007 13:57:19.706127  809201 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:57:19.706142  809201 kubeadm.go:310] 
	I1007 13:57:19.706243  809201 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:57:19.706250  809201 kubeadm.go:310] 
	I1007 13:57:19.706279  809201 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:57:19.706330  809201 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:57:19.706380  809201 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:57:19.706388  809201 kubeadm.go:310] 
	I1007 13:57:19.706432  809201 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:57:19.706438  809201 kubeadm.go:310] 
	I1007 13:57:19.706477  809201 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:57:19.706483  809201 kubeadm.go:310] 
	I1007 13:57:19.706526  809201 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:57:19.706603  809201 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:57:19.706664  809201 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:57:19.706670  809201 kubeadm.go:310] 
	I1007 13:57:19.706748  809201 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:57:19.706814  809201 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:57:19.706821  809201 kubeadm.go:310] 
	I1007 13:57:19.706910  809201 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8lhpsx.crsogb8kpwdb7eqt \
	I1007 13:57:19.707004  809201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:57:19.707028  809201 kubeadm.go:310] 	--control-plane 
	I1007 13:57:19.707034  809201 kubeadm.go:310] 
	I1007 13:57:19.707110  809201 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:57:19.707116  809201 kubeadm.go:310] 
	I1007 13:57:19.707186  809201 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8lhpsx.crsogb8kpwdb7eqt \
	I1007 13:57:19.707290  809201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:57:19.707302  809201 cni.go:84] Creating CNI manager for "kindnet"
	I1007 13:57:19.708764  809201 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 13:57:19.709998  809201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 13:57:19.715787  809201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 13:57:19.715808  809201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 13:57:19.734667  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 13:57:20.037464  809201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:57:20.037545  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:20.037560  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-221184 minikube.k8s.io/updated_at=2024_10_07T13_57_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=kindnet-221184 minikube.k8s.io/primary=true
	I1007 13:57:20.302113  809201 ops.go:34] apiserver oom_adj: -16
	I1007 13:57:20.302146  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:20.802376  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:21.302633  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:21.802858  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:22.302223  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:22.802286  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:23.302905  809201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:23.396128  809201 kubeadm.go:1113] duration metric: took 3.358650733s to wait for elevateKubeSystemPrivileges
	I1007 13:57:23.396184  809201 kubeadm.go:394] duration metric: took 13.992332723s to StartCluster
	I1007 13:57:23.396213  809201 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:23.396299  809201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:57:23.397368  809201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:23.397634  809201 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:57:23.397645  809201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:57:23.397679  809201 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:57:23.397798  809201 addons.go:69] Setting storage-provisioner=true in profile "kindnet-221184"
	I1007 13:57:23.397822  809201 addons.go:234] Setting addon storage-provisioner=true in "kindnet-221184"
	I1007 13:57:23.397822  809201 addons.go:69] Setting default-storageclass=true in profile "kindnet-221184"
	I1007 13:57:23.397854  809201 host.go:66] Checking if "kindnet-221184" exists ...
	I1007 13:57:23.397865  809201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-221184"
	I1007 13:57:23.397883  809201 config.go:182] Loaded profile config "kindnet-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:57:23.398284  809201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:23.398319  809201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:23.398327  809201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:23.398352  809201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:23.399936  809201 out.go:177] * Verifying Kubernetes components...
	I1007 13:57:23.401456  809201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:23.414725  809201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I1007 13:57:23.414973  809201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1007 13:57:23.415478  809201 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:23.415566  809201 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:23.416061  809201 main.go:141] libmachine: Using API Version  1
	I1007 13:57:23.416081  809201 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:23.416228  809201 main.go:141] libmachine: Using API Version  1
	I1007 13:57:23.416248  809201 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:23.416481  809201 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:23.416679  809201 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:23.416872  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetState
	I1007 13:57:23.417098  809201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:23.417146  809201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:23.420708  809201 addons.go:234] Setting addon default-storageclass=true in "kindnet-221184"
	I1007 13:57:23.420760  809201 host.go:66] Checking if "kindnet-221184" exists ...
	I1007 13:57:23.421136  809201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:23.421183  809201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:23.436453  809201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I1007 13:57:23.436869  809201 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:23.436983  809201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I1007 13:57:23.437534  809201 main.go:141] libmachine: Using API Version  1
	I1007 13:57:23.437557  809201 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:23.437576  809201 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:23.437918  809201 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:23.438062  809201 main.go:141] libmachine: Using API Version  1
	I1007 13:57:23.438070  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetState
	I1007 13:57:23.438083  809201 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:23.438405  809201 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:23.439667  809201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:23.439728  809201 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:23.440376  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:23.442718  809201 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:57:22.462596  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:22.463120  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has current primary IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:22.463141  809501 main.go:141] libmachine: (calico-221184) Found IP for machine: 192.168.39.199
	I1007 13:57:22.463150  809501 main.go:141] libmachine: (calico-221184) Reserving static IP address...
	I1007 13:57:22.463455  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find host DHCP lease matching {name: "calico-221184", mac: "52:54:00:2a:5e:47", ip: "192.168.39.199"} in network mk-calico-221184
	I1007 13:57:22.553745  809501 main.go:141] libmachine: (calico-221184) DBG | Getting to WaitForSSH function...
	I1007 13:57:22.553780  809501 main.go:141] libmachine: (calico-221184) Reserved static IP address: 192.168.39.199
	I1007 13:57:22.553793  809501 main.go:141] libmachine: (calico-221184) Waiting for SSH to be available...
	I1007 13:57:22.556615  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:22.556932  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184
	I1007 13:57:22.556957  809501 main.go:141] libmachine: (calico-221184) DBG | unable to find defined IP address of network mk-calico-221184 interface with MAC address 52:54:00:2a:5e:47
	I1007 13:57:22.557134  809501 main.go:141] libmachine: (calico-221184) DBG | Using SSH client type: external
	I1007 13:57:22.557166  809501 main.go:141] libmachine: (calico-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa (-rw-------)
	I1007 13:57:22.557206  809501 main.go:141] libmachine: (calico-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:57:22.557227  809501 main.go:141] libmachine: (calico-221184) DBG | About to run SSH command:
	I1007 13:57:22.557241  809501 main.go:141] libmachine: (calico-221184) DBG | exit 0
	I1007 13:57:22.560858  809501 main.go:141] libmachine: (calico-221184) DBG | SSH cmd err, output: exit status 255: 
	I1007 13:57:22.560891  809501 main.go:141] libmachine: (calico-221184) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 13:57:22.560901  809501 main.go:141] libmachine: (calico-221184) DBG | command : exit 0
	I1007 13:57:22.560906  809501 main.go:141] libmachine: (calico-221184) DBG | err     : exit status 255
	I1007 13:57:22.560913  809501 main.go:141] libmachine: (calico-221184) DBG | output  : 
	I1007 13:57:23.444449  809201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:57:23.444473  809201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:57:23.444500  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:23.447504  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:23.447848  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:23.447869  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:23.448065  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:23.448248  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:23.448368  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:23.448531  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:23.457628  809201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I1007 13:57:23.458194  809201 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:23.458719  809201 main.go:141] libmachine: Using API Version  1
	I1007 13:57:23.458734  809201 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:23.459239  809201 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:23.459425  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetState
	I1007 13:57:23.461283  809201 main.go:141] libmachine: (kindnet-221184) Calling .DriverName
	I1007 13:57:23.461522  809201 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:57:23.461535  809201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:57:23.461550  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHHostname
	I1007 13:57:23.464310  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:23.464693  809201 main.go:141] libmachine: (kindnet-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:2d:ce", ip: ""} in network mk-kindnet-221184: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:52 +0000 UTC Type:0 Mac:52:54:00:07:2d:ce Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:kindnet-221184 Clientid:01:52:54:00:07:2d:ce}
	I1007 13:57:23.464708  809201 main.go:141] libmachine: (kindnet-221184) DBG | domain kindnet-221184 has defined IP address 192.168.50.180 and MAC address 52:54:00:07:2d:ce in network mk-kindnet-221184
	I1007 13:57:23.464900  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHPort
	I1007 13:57:23.465091  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHKeyPath
	I1007 13:57:23.465210  809201 main.go:141] libmachine: (kindnet-221184) Calling .GetSSHUsername
	I1007 13:57:23.465361  809201 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/kindnet-221184/id_rsa Username:docker}
	I1007 13:57:23.635117  809201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
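A note on the command above: the sed pipeline rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 here) and adds query logging. Reconstructed from the sed expressions, not copied from the live cluster, the affected part of the Corefile ends up looking roughly like this:

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

The "host record injected into CoreDNS's ConfigMap" entry a few lines below confirms the replace succeeded.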
	I1007 13:57:23.698036  809201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:57:23.796666  809201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:57:23.858331  809201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:57:24.518254  809201 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1007 13:57:24.519593  809201 node_ready.go:35] waiting up to 15m0s for node "kindnet-221184" to be "Ready" ...
	I1007 13:57:24.858451  809201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061703183s)
	I1007 13:57:24.858541  809201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.000159785s)
	I1007 13:57:24.858557  809201 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:24.858577  809201 main.go:141] libmachine: (kindnet-221184) Calling .Close
	I1007 13:57:24.858586  809201 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:24.858596  809201 main.go:141] libmachine: (kindnet-221184) Calling .Close
	I1007 13:57:24.859077  809201 main.go:141] libmachine: (kindnet-221184) DBG | Closing plugin on server side
	I1007 13:57:24.859117  809201 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:24.859136  809201 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:24.859143  809201 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:24.859150  809201 main.go:141] libmachine: (kindnet-221184) Calling .Close
	I1007 13:57:24.859091  809201 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:24.859203  809201 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:24.859211  809201 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:24.859217  809201 main.go:141] libmachine: (kindnet-221184) Calling .Close
	I1007 13:57:24.859295  809201 main.go:141] libmachine: (kindnet-221184) DBG | Closing plugin on server side
	I1007 13:57:24.859552  809201 main.go:141] libmachine: (kindnet-221184) DBG | Closing plugin on server side
	I1007 13:57:24.859568  809201 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:24.859600  809201 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:24.859619  809201 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:24.859603  809201 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:24.859605  809201 main.go:141] libmachine: (kindnet-221184) DBG | Closing plugin on server side
	I1007 13:57:24.889001  809201 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:24.889031  809201 main.go:141] libmachine: (kindnet-221184) Calling .Close
	I1007 13:57:24.889483  809201 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:24.889533  809201 main.go:141] libmachine: (kindnet-221184) DBG | Closing plugin on server side
	I1007 13:57:24.889537  809201 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:24.891801  809201 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
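For reference, the addon selection here comes from the toEnable map logged at 13:57:23.397679 above: only storage-provisioner and default-storageclass are true, which matches this "Enabled addons" line. Outside the integration tests the same addons can be toggled per profile with the minikube CLI, for example (illustrative usage only):

	minikube addons enable storage-provisioner -p kindnet-221184
	minikube addons list -p kindnet-221184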
	I1007 13:57:26.991949  810702 start.go:364] duration metric: took 39.142738973s to acquireMachinesLock for "custom-flannel-221184"
	I1007 13:57:26.992044  810702 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:57:26.992170  810702 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:57:24.893210  809201 addons.go:510] duration metric: took 1.495534257s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 13:57:25.023917  809201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-221184" context rescaled to 1 replicas
	I1007 13:57:26.523668  809201 node_ready.go:53] node "kindnet-221184" has status "Ready":"False"
	I1007 13:57:26.994819  810702 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 13:57:26.995095  810702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:26.995166  810702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:27.013601  810702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1007 13:57:27.014191  810702 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:27.014930  810702 main.go:141] libmachine: Using API Version  1
	I1007 13:57:27.014956  810702 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:27.015398  810702 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:27.015645  810702 main.go:141] libmachine: (custom-flannel-221184) Calling .GetMachineName
	I1007 13:57:27.015828  810702 main.go:141] libmachine: (custom-flannel-221184) Calling .DriverName
	I1007 13:57:27.015993  810702 start.go:159] libmachine.API.Create for "custom-flannel-221184" (driver="kvm2")
	I1007 13:57:27.016028  810702 client.go:168] LocalClient.Create starting
	I1007 13:57:27.016061  810702 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:57:27.016115  810702 main.go:141] libmachine: Decoding PEM data...
	I1007 13:57:27.016142  810702 main.go:141] libmachine: Parsing certificate...
	I1007 13:57:27.016222  810702 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:57:27.016251  810702 main.go:141] libmachine: Decoding PEM data...
	I1007 13:57:27.016266  810702 main.go:141] libmachine: Parsing certificate...
	I1007 13:57:27.016283  810702 main.go:141] libmachine: Running pre-create checks...
	I1007 13:57:27.016296  810702 main.go:141] libmachine: (custom-flannel-221184) Calling .PreCreateCheck
	I1007 13:57:27.016710  810702 main.go:141] libmachine: (custom-flannel-221184) Calling .GetConfigRaw
	I1007 13:57:27.017183  810702 main.go:141] libmachine: Creating machine...
	I1007 13:57:27.017198  810702 main.go:141] libmachine: (custom-flannel-221184) Calling .Create
	I1007 13:57:27.017407  810702 main.go:141] libmachine: (custom-flannel-221184) Creating KVM machine...
	I1007 13:57:27.018892  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | found existing default KVM network
	I1007 13:57:27.020946  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.020710  811115 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:45:b1} reservation:<nil>}
	I1007 13:57:27.022139  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.021989  811115 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2a:b1:a7} reservation:<nil>}
	I1007 13:57:27.023198  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.023104  811115 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:40:af} reservation:<nil>}
	I1007 13:57:27.024825  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.024742  811115 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003971b0}
	I1007 13:57:27.024896  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | created network xml: 
	I1007 13:57:27.024916  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | <network>
	I1007 13:57:27.024931  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   <name>mk-custom-flannel-221184</name>
	I1007 13:57:27.024941  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   <dns enable='no'/>
	I1007 13:57:27.024950  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   
	I1007 13:57:27.024959  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1007 13:57:27.024967  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |     <dhcp>
	I1007 13:57:27.024996  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1007 13:57:27.025006  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |     </dhcp>
	I1007 13:57:27.025013  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   </ip>
	I1007 13:57:27.025021  810702 main.go:141] libmachine: (custom-flannel-221184) DBG |   
	I1007 13:57:27.025034  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | </network>
	I1007 13:57:27.025049  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | 
	I1007 13:57:27.030652  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | trying to create private KVM network mk-custom-flannel-221184 192.168.72.0/24...
	I1007 13:57:27.114145  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | private KVM network mk-custom-flannel-221184 192.168.72.0/24 created
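The network XML printed above is what the kvm2 driver feeds to libvirt to create the isolated mk-custom-flannel-221184 network (192.168.72.0/24, DHCP range .2 to .253, DNS disabled). Done by hand, the rough equivalent would be saving that XML to a file and running the following; this is an illustration only, the driver creates the network through the libvirt API rather than virsh:

	virsh net-define mk-custom-flannel-221184.xml
	virsh net-start mk-custom-flannel-221184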
	I1007 13:57:27.114214  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.114134  811115 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:57:27.114227  810702 main.go:141] libmachine: (custom-flannel-221184) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184 ...
	I1007 13:57:27.114245  810702 main.go:141] libmachine: (custom-flannel-221184) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:57:27.114328  810702 main.go:141] libmachine: (custom-flannel-221184) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:57:27.410327  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.410193  811115 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184/id_rsa...
	I1007 13:57:27.717166  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.717017  811115 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184/custom-flannel-221184.rawdisk...
	I1007 13:57:27.717226  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Writing magic tar header
	I1007 13:57:27.717251  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Writing SSH key tar header
	I1007 13:57:27.717265  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:27.717136  811115 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184 ...
	I1007 13:57:27.717281  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184
	I1007 13:57:27.717368  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:57:27.717401  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:57:27.717414  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184 (perms=drwx------)
	I1007 13:57:27.717424  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:57:27.717443  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:57:27.717456  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:57:27.717466  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Checking permissions on dir: /home
	I1007 13:57:27.717477  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | Skipping /home - not owner
	I1007 13:57:27.717492  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:57:27.717511  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:57:27.717566  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:57:27.717587  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:57:27.717601  810702 main.go:141] libmachine: (custom-flannel-221184) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:57:27.717618  810702 main.go:141] libmachine: (custom-flannel-221184) Creating domain...
	I1007 13:57:27.718798  810702 main.go:141] libmachine: (custom-flannel-221184) define libvirt domain using xml: 
	I1007 13:57:27.718825  810702 main.go:141] libmachine: (custom-flannel-221184) <domain type='kvm'>
	I1007 13:57:27.718836  810702 main.go:141] libmachine: (custom-flannel-221184)   <name>custom-flannel-221184</name>
	I1007 13:57:27.718844  810702 main.go:141] libmachine: (custom-flannel-221184)   <memory unit='MiB'>3072</memory>
	I1007 13:57:27.718852  810702 main.go:141] libmachine: (custom-flannel-221184)   <vcpu>2</vcpu>
	I1007 13:57:27.718862  810702 main.go:141] libmachine: (custom-flannel-221184)   <features>
	I1007 13:57:27.718880  810702 main.go:141] libmachine: (custom-flannel-221184)     <acpi/>
	I1007 13:57:27.718892  810702 main.go:141] libmachine: (custom-flannel-221184)     <apic/>
	I1007 13:57:27.718900  810702 main.go:141] libmachine: (custom-flannel-221184)     <pae/>
	I1007 13:57:27.718918  810702 main.go:141] libmachine: (custom-flannel-221184)     
	I1007 13:57:27.718928  810702 main.go:141] libmachine: (custom-flannel-221184)   </features>
	I1007 13:57:27.718940  810702 main.go:141] libmachine: (custom-flannel-221184)   <cpu mode='host-passthrough'>
	I1007 13:57:27.718947  810702 main.go:141] libmachine: (custom-flannel-221184)   
	I1007 13:57:27.718957  810702 main.go:141] libmachine: (custom-flannel-221184)   </cpu>
	I1007 13:57:27.718966  810702 main.go:141] libmachine: (custom-flannel-221184)   <os>
	I1007 13:57:27.718974  810702 main.go:141] libmachine: (custom-flannel-221184)     <type>hvm</type>
	I1007 13:57:27.718984  810702 main.go:141] libmachine: (custom-flannel-221184)     <boot dev='cdrom'/>
	I1007 13:57:27.718992  810702 main.go:141] libmachine: (custom-flannel-221184)     <boot dev='hd'/>
	I1007 13:57:27.719012  810702 main.go:141] libmachine: (custom-flannel-221184)     <bootmenu enable='no'/>
	I1007 13:57:27.719023  810702 main.go:141] libmachine: (custom-flannel-221184)   </os>
	I1007 13:57:27.719029  810702 main.go:141] libmachine: (custom-flannel-221184)   <devices>
	I1007 13:57:27.719042  810702 main.go:141] libmachine: (custom-flannel-221184)     <disk type='file' device='cdrom'>
	I1007 13:57:27.719059  810702 main.go:141] libmachine: (custom-flannel-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184/boot2docker.iso'/>
	I1007 13:57:27.719072  810702 main.go:141] libmachine: (custom-flannel-221184)       <target dev='hdc' bus='scsi'/>
	I1007 13:57:27.719084  810702 main.go:141] libmachine: (custom-flannel-221184)       <readonly/>
	I1007 13:57:27.719095  810702 main.go:141] libmachine: (custom-flannel-221184)     </disk>
	I1007 13:57:27.719106  810702 main.go:141] libmachine: (custom-flannel-221184)     <disk type='file' device='disk'>
	I1007 13:57:27.719122  810702 main.go:141] libmachine: (custom-flannel-221184)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:57:27.719148  810702 main.go:141] libmachine: (custom-flannel-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/custom-flannel-221184/custom-flannel-221184.rawdisk'/>
	I1007 13:57:27.719179  810702 main.go:141] libmachine: (custom-flannel-221184)       <target dev='hda' bus='virtio'/>
	I1007 13:57:27.719207  810702 main.go:141] libmachine: (custom-flannel-221184)     </disk>
	I1007 13:57:27.719222  810702 main.go:141] libmachine: (custom-flannel-221184)     <interface type='network'>
	I1007 13:57:27.719254  810702 main.go:141] libmachine: (custom-flannel-221184)       <source network='mk-custom-flannel-221184'/>
	I1007 13:57:27.719268  810702 main.go:141] libmachine: (custom-flannel-221184)       <model type='virtio'/>
	I1007 13:57:27.719279  810702 main.go:141] libmachine: (custom-flannel-221184)     </interface>
	I1007 13:57:27.719288  810702 main.go:141] libmachine: (custom-flannel-221184)     <interface type='network'>
	I1007 13:57:27.719299  810702 main.go:141] libmachine: (custom-flannel-221184)       <source network='default'/>
	I1007 13:57:27.719307  810702 main.go:141] libmachine: (custom-flannel-221184)       <model type='virtio'/>
	I1007 13:57:27.719316  810702 main.go:141] libmachine: (custom-flannel-221184)     </interface>
	I1007 13:57:27.719325  810702 main.go:141] libmachine: (custom-flannel-221184)     <serial type='pty'>
	I1007 13:57:27.719335  810702 main.go:141] libmachine: (custom-flannel-221184)       <target port='0'/>
	I1007 13:57:27.719345  810702 main.go:141] libmachine: (custom-flannel-221184)     </serial>
	I1007 13:57:27.719360  810702 main.go:141] libmachine: (custom-flannel-221184)     <console type='pty'>
	I1007 13:57:27.719381  810702 main.go:141] libmachine: (custom-flannel-221184)       <target type='serial' port='0'/>
	I1007 13:57:27.719396  810702 main.go:141] libmachine: (custom-flannel-221184)     </console>
	I1007 13:57:27.719429  810702 main.go:141] libmachine: (custom-flannel-221184)     <rng model='virtio'>
	I1007 13:57:27.719455  810702 main.go:141] libmachine: (custom-flannel-221184)       <backend model='random'>/dev/random</backend>
	I1007 13:57:27.719467  810702 main.go:141] libmachine: (custom-flannel-221184)     </rng>
	I1007 13:57:27.719477  810702 main.go:141] libmachine: (custom-flannel-221184)     
	I1007 13:57:27.719486  810702 main.go:141] libmachine: (custom-flannel-221184)     
	I1007 13:57:27.719496  810702 main.go:141] libmachine: (custom-flannel-221184)   </devices>
	I1007 13:57:27.719503  810702 main.go:141] libmachine: (custom-flannel-221184) </domain>
	I1007 13:57:27.719514  810702 main.go:141] libmachine: (custom-flannel-221184) 
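The domain XML above defines the VM itself: 2 vCPUs, 3072 MiB of memory, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-custom-flannel-221184 network, one on libvirt's default network for outbound access). As a rough manual equivalent of the "define libvirt domain using xml" and subsequent start steps, one could run (illustration only):

	virsh define custom-flannel-221184.xml
	virsh start custom-flannel-221184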
	I1007 13:57:27.726802  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a9:ca:38 in network default
	I1007 13:57:27.727502  810702 main.go:141] libmachine: (custom-flannel-221184) Ensuring networks are active...
	I1007 13:57:27.727532  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:27.728936  810702 main.go:141] libmachine: (custom-flannel-221184) Ensuring network default is active
	I1007 13:57:27.729364  810702 main.go:141] libmachine: (custom-flannel-221184) Ensuring network mk-custom-flannel-221184 is active
	I1007 13:57:27.730178  810702 main.go:141] libmachine: (custom-flannel-221184) Getting domain xml...
	I1007 13:57:27.731221  810702 main.go:141] libmachine: (custom-flannel-221184) Creating domain...
	I1007 13:57:25.563054  809501 main.go:141] libmachine: (calico-221184) DBG | Getting to WaitForSSH function...
	I1007 13:57:25.565324  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.565825  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:25.565867  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.566056  809501 main.go:141] libmachine: (calico-221184) DBG | Using SSH client type: external
	I1007 13:57:25.566089  809501 main.go:141] libmachine: (calico-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa (-rw-------)
	I1007 13:57:25.566117  809501 main.go:141] libmachine: (calico-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:57:25.566125  809501 main.go:141] libmachine: (calico-221184) DBG | About to run SSH command:
	I1007 13:57:25.566153  809501 main.go:141] libmachine: (calico-221184) DBG | exit 0
	I1007 13:57:25.694377  809501 main.go:141] libmachine: (calico-221184) DBG | SSH cmd err, output: <nil>: 
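WaitForSSH simply runs "exit 0" over SSH until it succeeds. Assembled from the argument list in the log entry at 13:57:25.566117, the probe is roughly the command below; the earlier attempt at 13:57:22 exited 255 because the guest's IP was not known yet (note the bare docker@ destination in that entry), while the empty error here means the connection now works:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa \
	    -p 22 docker@192.168.39.199 "exit 0"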
	I1007 13:57:25.694638  809501 main.go:141] libmachine: (calico-221184) KVM machine creation complete!
	I1007 13:57:25.694959  809501 main.go:141] libmachine: (calico-221184) Calling .GetConfigRaw
	I1007 13:57:25.695525  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:25.695775  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:25.695977  809501 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:57:25.695991  809501 main.go:141] libmachine: (calico-221184) Calling .GetState
	I1007 13:57:25.697283  809501 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:57:25.697303  809501 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:57:25.697311  809501 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:57:25.697319  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:25.699867  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.700178  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:25.700208  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.700351  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:25.700555  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.700713  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.700861  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:25.701065  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:25.701309  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:25.701324  809501 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:57:25.813897  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:57:25.813924  809501 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:57:25.813937  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:25.816912  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.817260  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:25.817304  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.817465  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:25.817696  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.817891  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.818071  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:25.818266  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:25.818446  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:25.818458  809501 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:57:25.931198  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:57:25.931304  809501 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:57:25.931320  809501 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:57:25.931334  809501 main.go:141] libmachine: (calico-221184) Calling .GetMachineName
	I1007 13:57:25.931602  809501 buildroot.go:166] provisioning hostname "calico-221184"
	I1007 13:57:25.931631  809501 main.go:141] libmachine: (calico-221184) Calling .GetMachineName
	I1007 13:57:25.931826  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:25.934294  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.934695  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:25.934721  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:25.934895  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:25.935081  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.935217  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:25.935371  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:25.935536  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:25.935757  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:25.935774  809501 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-221184 && echo "calico-221184" | sudo tee /etc/hostname
	I1007 13:57:26.058482  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-221184
	
	I1007 13:57:26.058517  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.061569  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.062077  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.062111  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.062345  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:26.062537  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.062733  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.062900  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:26.063083  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:26.063351  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:26.063379  809501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-221184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-221184/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-221184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:57:26.183532  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
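	The script above only touches the 127.0.1.1 loopback alias: it appends or rewrites that single entry so the guest can resolve its own hostname. As a hand check (illustrative only, not something this run executes), the result can be confirmed on the guest with:
	
	  $ grep -n 'calico-221184' /etc/hosts
	
	which should print a line such as "127.0.1.1 calico-221184".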
	I1007 13:57:26.183568  809501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:57:26.183653  809501 buildroot.go:174] setting up certificates
	I1007 13:57:26.183669  809501 provision.go:84] configureAuth start
	I1007 13:57:26.183689  809501 main.go:141] libmachine: (calico-221184) Calling .GetMachineName
	I1007 13:57:26.184012  809501 main.go:141] libmachine: (calico-221184) Calling .GetIP
	I1007 13:57:26.187064  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.187484  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.187512  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.187673  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.190107  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.190438  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.190465  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.190679  809501 provision.go:143] copyHostCerts
	I1007 13:57:26.190753  809501 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:57:26.190778  809501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:57:26.190847  809501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:57:26.190972  809501 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:57:26.190988  809501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:57:26.191017  809501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:57:26.191092  809501 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:57:26.191101  809501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:57:26.191125  809501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:57:26.191212  809501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.calico-221184 san=[127.0.0.1 192.168.39.199 calico-221184 localhost minikube]
	I1007 13:57:26.308268  809501 provision.go:177] copyRemoteCerts
	I1007 13:57:26.308347  809501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:57:26.308381  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.311253  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.311663  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.311695  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.311959  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:26.312172  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.312348  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:26.312509  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:26.400869  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:57:26.427736  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:57:26.456355  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:57:26.485084  809501 provision.go:87] duration metric: took 301.397655ms to configureAuth
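	The server certificate generated above carries the SANs listed by provision.go:117 (127.0.0.1, 192.168.39.199, calico-221184, localhost, minikube) and is copied to /etc/docker/server.pem by the scp just before this point. A quick hand check on the guest (a sketch, not part of the recorded run) would be:
	
	  $ sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'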
	I1007 13:57:26.485117  809501 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:57:26.485306  809501 config.go:182] Loaded profile config "calico-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:57:26.485396  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.488021  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.488441  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.488474  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.488627  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:26.488836  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.488994  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.489162  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:26.489336  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:26.489517  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:26.489531  809501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:57:26.731563  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:57:26.731592  809501 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:57:26.731600  809501 main.go:141] libmachine: (calico-221184) Calling .GetURL
	I1007 13:57:26.733127  809501 main.go:141] libmachine: (calico-221184) DBG | Using libvirt version 6000000
	I1007 13:57:26.735089  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.735390  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.735417  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.735577  809501 main.go:141] libmachine: Docker is up and running!
	I1007 13:57:26.735592  809501 main.go:141] libmachine: Reticulating splines...
	I1007 13:57:26.735599  809501 client.go:171] duration metric: took 24.35941076s to LocalClient.Create
	I1007 13:57:26.735623  809501 start.go:167] duration metric: took 24.359479244s to libmachine.API.Create "calico-221184"
	I1007 13:57:26.735633  809501 start.go:293] postStartSetup for "calico-221184" (driver="kvm2")
	I1007 13:57:26.735645  809501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:57:26.735665  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:26.735937  809501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:57:26.735964  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.738474  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.738810  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.738829  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.739009  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:26.739216  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.739367  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:26.739518  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:26.825543  809501 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:57:26.830438  809501 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:57:26.830474  809501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:57:26.830560  809501 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:57:26.830664  809501 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:57:26.830810  809501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:57:26.843225  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:57:26.870462  809501 start.go:296] duration metric: took 134.780013ms for postStartSetup
	I1007 13:57:26.870535  809501 main.go:141] libmachine: (calico-221184) Calling .GetConfigRaw
	I1007 13:57:26.871168  809501 main.go:141] libmachine: (calico-221184) Calling .GetIP
	I1007 13:57:26.873877  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.874329  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.874362  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.874619  809501 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/config.json ...
	I1007 13:57:26.874826  809501 start.go:128] duration metric: took 24.52290045s to createHost
	I1007 13:57:26.874850  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.877150  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.877591  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.877620  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.877775  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:26.877968  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.878142  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:26.878263  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:26.878416  809501 main.go:141] libmachine: Using SSH client type: native
	I1007 13:57:26.878648  809501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I1007 13:57:26.878661  809501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:57:26.991747  809501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728309446.967893203
	
	I1007 13:57:26.991781  809501 fix.go:216] guest clock: 1728309446.967893203
	I1007 13:57:26.991792  809501 fix.go:229] Guest: 2024-10-07 13:57:26.967893203 +0000 UTC Remote: 2024-10-07 13:57:26.874839694 +0000 UTC m=+48.425907712 (delta=93.053509ms)
	I1007 13:57:26.991822  809501 fix.go:200] guest clock delta is within tolerance: 93.053509ms
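	The clock check above runs date +%s.%N on the guest and compares it with the host-side timestamp recorded when the command returned; here the ~93ms delta is within minikube's tolerance, so no clock adjustment is needed. A rough manual equivalent (sketch only; substitute the id_rsa path and user shown in the ssh client lines above) is:
	
	  $ guest=$(ssh -i <id_rsa path> docker@192.168.39.199 'date +%s.%N')
	  $ host=$(date +%s.%N)
	  $ awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'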
	I1007 13:57:26.991830  809501 start.go:83] releasing machines lock for "calico-221184", held for 24.640115917s
	I1007 13:57:26.991864  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:26.992198  809501 main.go:141] libmachine: (calico-221184) Calling .GetIP
	I1007 13:57:26.995906  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.996240  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:26.996266  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:26.996470  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:26.997144  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:26.997333  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:26.997457  809501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:57:26.997536  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:26.997548  809501 ssh_runner.go:195] Run: cat /version.json
	I1007 13:57:26.997580  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:27.000648  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:27.000896  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:27.000969  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:27.001005  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:27.001175  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:27.001347  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:27.001371  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:27.001392  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:27.001562  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:27.001585  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:27.001703  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:27.001790  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:27.001929  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:27.002111  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:27.109763  809501 ssh_runner.go:195] Run: systemctl --version
	I1007 13:57:27.118882  809501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:57:27.289814  809501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:57:27.296766  809501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:57:27.296834  809501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:57:27.315305  809501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:57:27.315338  809501 start.go:495] detecting cgroup driver to use...
	I1007 13:57:27.315423  809501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:57:27.333003  809501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:57:27.351362  809501 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:57:27.351441  809501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:57:27.368828  809501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:57:27.385153  809501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:57:27.516262  809501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:57:27.667281  809501 docker.go:233] disabling docker service ...
	I1007 13:57:27.667372  809501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:57:27.682266  809501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:57:27.696062  809501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:57:27.853376  809501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:57:27.990761  809501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:57:28.006061  809501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:57:28.032495  809501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:57:28.032557  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.051485  809501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:57:28.051566  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.063966  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.078161  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.096840  809501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:57:28.109480  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.121508  809501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.141808  809501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:57:28.152979  809501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:57:28.163026  809501 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:57:28.163099  809501 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:57:28.177762  809501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:57:28.192616  809501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:28.312042  809501 ssh_runner.go:195] Run: sudo systemctl restart crio
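	The sequence above pins the pause image, switches CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and lowers the unprivileged port floor before restarting the daemon. The resulting drop-in can be spot-checked by hand (not part of this run) with:
	
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	
	which should show pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and the net.ipv4.ip_unprivileged_port_start=0 sysctl.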
	I1007 13:57:28.418341  809501 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:57:28.418432  809501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:57:28.424332  809501 start.go:563] Will wait 60s for crictl version
	I1007 13:57:28.424391  809501 ssh_runner.go:195] Run: which crictl
	I1007 13:57:28.429115  809501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:57:28.476353  809501 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:57:28.476452  809501 ssh_runner.go:195] Run: crio --version
	I1007 13:57:28.510328  809501 ssh_runner.go:195] Run: crio --version
	I1007 13:57:28.547740  809501 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:57:28.524284  809201 node_ready.go:53] node "kindnet-221184" has status "Ready":"False"
	I1007 13:57:31.024507  809201 node_ready.go:53] node "kindnet-221184" has status "Ready":"False"
	I1007 13:57:28.128454  810702 main.go:141] libmachine: (custom-flannel-221184) Waiting to get IP...
	I1007 13:57:28.129457  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:28.129958  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:28.130006  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:28.129945  811115 retry.go:31] will retry after 293.19627ms: waiting for machine to come up
	I1007 13:57:28.424612  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:28.425211  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:28.425239  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:28.425157  811115 retry.go:31] will retry after 327.038052ms: waiting for machine to come up
	I1007 13:57:28.753745  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:28.754315  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:28.754347  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:28.754253  811115 retry.go:31] will retry after 324.522097ms: waiting for machine to come up
	I1007 13:57:29.080997  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:29.081631  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:29.081664  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:29.081529  811115 retry.go:31] will retry after 421.189914ms: waiting for machine to come up
	I1007 13:57:29.504391  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:29.504894  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:29.504931  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:29.504841  811115 retry.go:31] will retry after 479.730308ms: waiting for machine to come up
	I1007 13:57:29.987072  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:29.987613  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:29.987641  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:29.987567  811115 retry.go:31] will retry after 846.13235ms: waiting for machine to come up
	I1007 13:57:30.835450  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:30.836022  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:30.836055  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:30.835950  811115 retry.go:31] will retry after 1.186974634s: waiting for machine to come up
	I1007 13:57:32.025070  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:32.025652  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:32.025680  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:32.025613  811115 retry.go:31] will retry after 951.250028ms: waiting for machine to come up
	I1007 13:57:28.548936  809501 main.go:141] libmachine: (calico-221184) Calling .GetIP
	I1007 13:57:28.552492  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:28.553003  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:28.553074  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:28.553365  809501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:57:28.558174  809501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:57:28.576086  809501 kubeadm.go:883] updating cluster {Name:calico-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:57:28.576202  809501 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:57:28.576279  809501 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:57:28.622957  809501 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:57:28.623047  809501 ssh_runner.go:195] Run: which lz4
	I1007 13:57:28.628221  809501 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:57:28.633346  809501 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:57:28.633394  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:57:30.248548  809501 crio.go:462] duration metric: took 1.620387709s to copy over tarball
	I1007 13:57:30.248665  809501 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:57:32.681634  809501 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.432926397s)
	I1007 13:57:32.681680  809501 crio.go:469] duration metric: took 2.43309472s to extract the tarball
	I1007 13:57:32.681688  809501 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:57:32.740337  809501 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:57:32.790136  809501 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:57:32.790171  809501 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:57:32.790183  809501 kubeadm.go:934] updating node { 192.168.39.199 8443 v1.31.1 crio true true} ...
	I1007 13:57:32.790412  809501 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-221184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1007 13:57:32.790513  809501 ssh_runner.go:195] Run: crio config
	I1007 13:57:32.844544  809501 cni.go:84] Creating CNI manager for "calico"
	I1007 13:57:32.844574  809501 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:57:32.844597  809501 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-221184 NodeName:calico-221184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:57:32.844756  809501 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-221184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:57:32.844841  809501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:57:32.857877  809501 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:57:32.857952  809501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:57:32.869226  809501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 13:57:32.889637  809501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:57:32.908846  809501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
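	The kubeadm config rendered above is what is written to /var/tmp/minikube/kubeadm.yaml.new here; it is promoted to /var/tmp/minikube/kubeadm.yaml just before kubeadm init runs (see the cp further down). If one wanted to vet such a file by hand on the node first, a dry run (a sketch, not executed by this test) would be:
	
	  $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run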
	I1007 13:57:32.929508  809501 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I1007 13:57:32.935594  809501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:57:32.951347  809501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:33.089793  809501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:57:33.111813  809501 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184 for IP: 192.168.39.199
	I1007 13:57:33.111844  809501 certs.go:194] generating shared ca certs ...
	I1007 13:57:33.111867  809501 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.112061  809501 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:57:33.112116  809501 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:57:33.112129  809501 certs.go:256] generating profile certs ...
	I1007 13:57:33.112231  809501 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.key
	I1007 13:57:33.112251  809501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.crt with IP's: []
	I1007 13:57:33.312076  809501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.crt ...
	I1007 13:57:33.312134  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.crt: {Name:mk6a16248b46225b6a8a66865fba07da44e9f435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.312349  809501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.key ...
	I1007 13:57:33.312365  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/client.key: {Name:mk307d8b6e8d824a995788bd1aa49b988f8de39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.312493  809501 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key.3519c45c
	I1007 13:57:33.312520  809501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt.3519c45c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.199]
	I1007 13:57:33.475243  809501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt.3519c45c ...
	I1007 13:57:33.475280  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt.3519c45c: {Name:mk11a3afbecac048980e6bd69a8768d759d5a3d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.475461  809501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key.3519c45c ...
	I1007 13:57:33.475476  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key.3519c45c: {Name:mk54540901ee4a9425033bf76aa22ee7480c2a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.475569  809501 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt.3519c45c -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt
	I1007 13:57:33.475647  809501 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key.3519c45c -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key
	I1007 13:57:33.475707  809501 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.key
	I1007 13:57:33.475720  809501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.crt with IP's: []
	I1007 13:57:33.707747  809501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.crt ...
	I1007 13:57:33.707794  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.crt: {Name:mk8eae2e0e005335578bd6767e81209d85d175aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.707999  809501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.key ...
	I1007 13:57:33.708017  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.key: {Name:mk2ff7b93180d0370bea98937a19aceb9c56731a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:33.708218  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:57:33.708275  809501 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:57:33.708292  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:57:33.708328  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:57:33.708364  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:57:33.708395  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:57:33.708451  809501 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:57:33.709133  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:57:33.738500  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:57:33.766716  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:57:33.799138  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:57:33.836471  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:57:33.867072  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:57:33.892812  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:57:33.921784  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/calico-221184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:57:33.952247  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:57:33.979861  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:57:34.009113  809501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:57:34.039072  809501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:57:34.060229  809501 ssh_runner.go:195] Run: openssl version
	I1007 13:57:34.067730  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:57:34.080195  809501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:57:34.085681  809501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:57:34.085783  809501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:57:34.094373  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:57:34.106205  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:57:34.118450  809501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:57:34.124054  809501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:57:34.124146  809501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:57:34.131428  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:57:34.144052  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:57:34.159344  809501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:34.165151  809501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:34.165240  809501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:57:34.171873  809501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
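	The openssl x509 -hash -noout calls above print the OpenSSL subject hash for each CA file, and the ln -fs commands create the matching <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) in /etc/ssl/certs, which is how trust-store lookups find them. The same idea in one line, for the minikubeCA file shown above:
	
	  $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem) && sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"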
	I1007 13:57:34.184111  809501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:57:34.189482  809501 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:57:34.189561  809501 kubeadm.go:392] StartCluster: {Name:calico-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:57:34.189682  809501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:57:34.189749  809501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:57:34.233068  809501 cri.go:89] found id: ""
	I1007 13:57:34.233145  809501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:57:34.244614  809501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:57:34.256222  809501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:57:34.267822  809501 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:57:34.267848  809501 kubeadm.go:157] found existing configuration files:
	
	I1007 13:57:34.267909  809501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:57:34.278233  809501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:57:34.278316  809501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:57:34.293288  809501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:57:34.307580  809501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:57:34.307658  809501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:57:34.318616  809501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:57:34.329447  809501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:57:34.329584  809501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:57:34.340493  809501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:57:34.350561  809501 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:57:34.350627  809501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
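The block above (kubeadm.go:155/:163) is the stale-config check: for each kubeconfig under /etc/kubernetes, minikube greps for the expected `https://control-plane.minikube.internal:8443` endpoint and removes the file when the grep fails (here the files simply do not exist yet), so the upcoming `kubeadm init` can write fresh ones. A minimal sketch of that pattern, assuming local file access (run as root, as minikube does via sudo) instead of minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong API endpoint: treat as stale and remove,
			// mirroring the "will remove" / "rm -f" lines in the log.
			os.Remove(path)
			fmt.Println("removed stale config:", path)
			continue
		}
		fmt.Println("keeping config:", path)
	}
}
```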
	I1007 13:57:34.361580  809501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:57:34.426553  809501 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:57:34.426780  809501 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:57:34.571074  809501 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:57:34.571252  809501 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:57:34.571389  809501 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:57:34.580240  809501 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:57:33.024748  809201 node_ready.go:53] node "kindnet-221184" has status "Ready":"False"
	I1007 13:57:35.525728  809201 node_ready.go:53] node "kindnet-221184" has status "Ready":"False"
	I1007 13:57:36.523406  809201 node_ready.go:49] node "kindnet-221184" has status "Ready":"True"
	I1007 13:57:36.523443  809201 node_ready.go:38] duration metric: took 12.003816315s for node "kindnet-221184" to be "Ready" ...
	I1007 13:57:36.523459  809201 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:57:36.535216  809201 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-wqk5v" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:34.673740  809501 out.go:235]   - Generating certificates and keys ...
	I1007 13:57:34.673907  809501 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:57:34.674066  809501 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:57:34.760622  809501 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:57:34.829986  809501 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:57:34.928759  809501 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:57:35.161968  809501 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:57:35.479742  809501 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:57:35.479971  809501 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-221184 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I1007 13:57:35.616920  809501 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:57:35.617133  809501 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-221184 localhost] and IPs [192.168.39.199 127.0.0.1 ::1]
	I1007 13:57:35.964048  809501 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:57:36.030206  809501 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:57:36.104947  809501 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:57:36.105037  809501 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:57:36.346374  809501 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:57:36.638820  809501 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:57:36.930892  809501 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:57:37.211972  809501 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:57:37.522933  809501 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:57:37.523747  809501 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:57:37.528072  809501 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:57:32.978523  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:32.979308  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:32.979339  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:32.979175  811115 retry.go:31] will retry after 1.58800339s: waiting for machine to come up
	I1007 13:57:34.569238  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:34.569891  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:34.569926  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:34.569828  811115 retry.go:31] will retry after 2.143060194s: waiting for machine to come up
	I1007 13:57:36.715338  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:36.716110  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:36.716137  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:36.716025  811115 retry.go:31] will retry after 2.892475145s: waiting for machine to come up
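The interleaved `custom-flannel-221184` lines belong to a second profile whose KVM domain has not yet received a DHCP lease; the driver keeps retrying with growing delays (retry.go:31) until the IP appears. A toy sketch of that wait-with-backoff shape, with a hypothetical lookupIP function standing in for the libvirt lease query and a made-up address purely for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for querying the DHCP lease of the VM's MAC address.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.15", nil // hypothetical address for illustration
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2 // roughly mirrors the growing delays seen in the log
	}
}
```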
	I1007 13:57:37.530303  809501 out.go:235]   - Booting up control plane ...
	I1007 13:57:37.530497  809501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:57:37.530631  809501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:57:37.530748  809501 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:57:37.552361  809501 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:57:37.561820  809501 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:57:37.561897  809501 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:57:37.713977  809501 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:57:37.714157  809501 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:57:38.217570  809501 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.554304ms
	I1007 13:57:38.217661  809501 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:57:37.542647  809201 pod_ready.go:93] pod "coredns-7c65d6cfc9-wqk5v" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:37.542685  809201 pod_ready.go:82] duration metric: took 1.007431792s for pod "coredns-7c65d6cfc9-wqk5v" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.542700  809201 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.550387  809201 pod_ready.go:93] pod "etcd-kindnet-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:37.550497  809201 pod_ready.go:82] duration metric: took 7.786271ms for pod "etcd-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.550534  809201 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.566210  809201 pod_ready.go:93] pod "kube-apiserver-kindnet-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:37.566249  809201 pod_ready.go:82] duration metric: took 15.683627ms for pod "kube-apiserver-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.566264  809201 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.574059  809201 pod_ready.go:93] pod "kube-controller-manager-kindnet-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:37.574086  809201 pod_ready.go:82] duration metric: took 7.81379ms for pod "kube-controller-manager-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.574097  809201 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-jq4m7" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.725865  809201 pod_ready.go:93] pod "kube-proxy-jq4m7" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:37.725895  809201 pod_ready.go:82] duration metric: took 151.791181ms for pod "kube-proxy-jq4m7" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:37.725906  809201 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:38.124869  809201 pod_ready.go:93] pod "kube-scheduler-kindnet-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:57:38.124905  809201 pod_ready.go:82] duration metric: took 398.990613ms for pod "kube-scheduler-kindnet-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:57:38.124921  809201 pod_ready.go:39] duration metric: took 1.601443692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
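The pod_ready.go:79/:93 lines above are the per-pod wait: for each system-critical label (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) minikube blocks until the pod reports the Ready condition. A rough equivalent that shells out to kubectl rather than using client-go as minikube does; the context name and 15m budget come from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsReady reports whether every kube-system pod matching the selector
// currently has its Ready condition set to True.
func podsReady(ctx, selector string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", "kube-system",
		"get", "pods", "-l", selector,
		"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false
	}
	statuses := strings.Fields(string(out))
	if len(statuses) == 0 {
		return false // no pod matching the selector yet
	}
	for _, s := range statuses {
		if s != "True" {
			return false
		}
	}
	return true
}

func main() {
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	deadline := time.Now().Add(15 * time.Minute) // same budget as "waiting up to 15m0s" above
	for _, sel := range selectors {
		for !podsReady("kindnet-221184", sel) {
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", sel)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println(sel, "is Ready")
	}
}
```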
	I1007 13:57:38.124942  809201 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:57:38.125029  809201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:57:38.149405  809201 api_server.go:72] duration metric: took 14.751727387s to wait for apiserver process to appear ...
	I1007 13:57:38.149441  809201 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:57:38.149468  809201 api_server.go:253] Checking apiserver healthz at https://192.168.50.180:8443/healthz ...
	I1007 13:57:38.155531  809201 api_server.go:279] https://192.168.50.180:8443/healthz returned 200:
	ok
	I1007 13:57:38.157214  809201 api_server.go:141] control plane version: v1.31.1
	I1007 13:57:38.157248  809201 api_server.go:131] duration metric: took 7.79845ms to wait for apiserver health ...
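The api_server.go:253/:279 lines are the health probe: minikube polls the apiserver's /healthz endpoint until it answers 200 with the body `ok`. A self-contained sketch of such a probe against the address from the log; certificate verification is skipped only to keep the example short, whereas a real client would trust the cluster CA instead:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a certificate signed by the cluster CA; load that
		// CA in real code rather than disabling verification as done here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.180:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.50.180:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}
```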
	I1007 13:57:38.157261  809201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:57:38.328287  809201 system_pods.go:59] 8 kube-system pods found
	I1007 13:57:38.328331  809201 system_pods.go:61] "coredns-7c65d6cfc9-wqk5v" [69c907a0-58f5-4c35-b3f0-01a1bf7df642] Running
	I1007 13:57:38.328341  809201 system_pods.go:61] "etcd-kindnet-221184" [e506fd8b-0513-4ad3-83b3-f3569abed941] Running
	I1007 13:57:38.328348  809201 system_pods.go:61] "kindnet-nzzt6" [2a49bb3b-6601-4516-900d-f3138bc783bb] Running
	I1007 13:57:38.328354  809201 system_pods.go:61] "kube-apiserver-kindnet-221184" [a513b28a-13ee-4e48-91c6-19a7368ce5c1] Running
	I1007 13:57:38.328360  809201 system_pods.go:61] "kube-controller-manager-kindnet-221184" [94bf1743-07af-4060-9d1b-0d1cb84d79bb] Running
	I1007 13:57:38.328365  809201 system_pods.go:61] "kube-proxy-jq4m7" [decad43e-76ef-4c56-ab50-e9f20502a9eb] Running
	I1007 13:57:38.328370  809201 system_pods.go:61] "kube-scheduler-kindnet-221184" [3df4a00c-9fda-483f-937e-9a5913d83519] Running
	I1007 13:57:38.328375  809201 system_pods.go:61] "storage-provisioner" [6c352f55-804a-4eeb-ac46-719a0b6b65b7] Running
	I1007 13:57:38.328383  809201 system_pods.go:74] duration metric: took 171.115332ms to wait for pod list to return data ...
	I1007 13:57:38.328393  809201 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:57:38.524888  809201 default_sa.go:45] found service account: "default"
	I1007 13:57:38.524921  809201 default_sa.go:55] duration metric: took 196.521246ms for default service account to be created ...
	I1007 13:57:38.524932  809201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:57:38.727538  809201 system_pods.go:86] 8 kube-system pods found
	I1007 13:57:38.727575  809201 system_pods.go:89] "coredns-7c65d6cfc9-wqk5v" [69c907a0-58f5-4c35-b3f0-01a1bf7df642] Running
	I1007 13:57:38.727581  809201 system_pods.go:89] "etcd-kindnet-221184" [e506fd8b-0513-4ad3-83b3-f3569abed941] Running
	I1007 13:57:38.727585  809201 system_pods.go:89] "kindnet-nzzt6" [2a49bb3b-6601-4516-900d-f3138bc783bb] Running
	I1007 13:57:38.727589  809201 system_pods.go:89] "kube-apiserver-kindnet-221184" [a513b28a-13ee-4e48-91c6-19a7368ce5c1] Running
	I1007 13:57:38.727593  809201 system_pods.go:89] "kube-controller-manager-kindnet-221184" [94bf1743-07af-4060-9d1b-0d1cb84d79bb] Running
	I1007 13:57:38.727598  809201 system_pods.go:89] "kube-proxy-jq4m7" [decad43e-76ef-4c56-ab50-e9f20502a9eb] Running
	I1007 13:57:38.727601  809201 system_pods.go:89] "kube-scheduler-kindnet-221184" [3df4a00c-9fda-483f-937e-9a5913d83519] Running
	I1007 13:57:38.727605  809201 system_pods.go:89] "storage-provisioner" [6c352f55-804a-4eeb-ac46-719a0b6b65b7] Running
	I1007 13:57:38.727613  809201 system_pods.go:126] duration metric: took 202.673519ms to wait for k8s-apps to be running ...
	I1007 13:57:38.727620  809201 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:57:38.727666  809201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:57:38.747604  809201 system_svc.go:56] duration metric: took 19.968556ms WaitForService to wait for kubelet
	I1007 13:57:38.747646  809201 kubeadm.go:582] duration metric: took 15.349977409s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:57:38.747674  809201 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:57:38.924587  809201 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:57:38.924633  809201 node_conditions.go:123] node cpu capacity is 2
	I1007 13:57:38.924653  809201 node_conditions.go:105] duration metric: took 176.971071ms to run NodePressure ...
	I1007 13:57:38.924669  809201 start.go:241] waiting for startup goroutines ...
	I1007 13:57:38.924679  809201 start.go:246] waiting for cluster config update ...
	I1007 13:57:38.924696  809201 start.go:255] writing updated cluster config ...
	I1007 13:57:38.925088  809201 ssh_runner.go:195] Run: rm -f paused
	I1007 13:57:38.998948  809201 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:57:39.002861  809201 out.go:177] * Done! kubectl is now configured to use "kindnet-221184" cluster and "default" namespace by default
	I1007 13:57:39.611224  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:39.611805  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:39.611841  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:39.611772  811115 retry.go:31] will retry after 3.502475878s: waiting for machine to come up
	I1007 13:57:43.720872  809501 kubeadm.go:310] [api-check] The API server is healthy after 5.504696203s
	I1007 13:57:43.740636  809501 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:57:43.770760  809501 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:57:43.806800  809501 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:57:43.807090  809501 kubeadm.go:310] [mark-control-plane] Marking the node calico-221184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:57:43.828701  809501 kubeadm.go:310] [bootstrap-token] Using token: cp8283.ijlcmbv1cyj4hhn4
	I1007 13:57:43.830672  809501 out.go:235]   - Configuring RBAC rules ...
	I1007 13:57:43.830826  809501 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:57:43.840424  809501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:57:43.855357  809501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:57:43.861198  809501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:57:43.867235  809501 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:57:43.880008  809501 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:57:44.126237  809501 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:57:44.568072  809501 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:57:45.134221  809501 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:57:45.135712  809501 kubeadm.go:310] 
	I1007 13:57:45.135819  809501 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:57:45.135832  809501 kubeadm.go:310] 
	I1007 13:57:45.135965  809501 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:57:45.135983  809501 kubeadm.go:310] 
	I1007 13:57:45.136017  809501 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:57:45.136127  809501 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:57:45.136196  809501 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:57:45.136206  809501 kubeadm.go:310] 
	I1007 13:57:45.136282  809501 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:57:45.136295  809501 kubeadm.go:310] 
	I1007 13:57:45.136381  809501 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:57:45.136392  809501 kubeadm.go:310] 
	I1007 13:57:45.136471  809501 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:57:45.136586  809501 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:57:45.136677  809501 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:57:45.136691  809501 kubeadm.go:310] 
	I1007 13:57:45.136809  809501 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:57:45.136947  809501 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:57:45.136980  809501 kubeadm.go:310] 
	I1007 13:57:45.137094  809501 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cp8283.ijlcmbv1cyj4hhn4 \
	I1007 13:57:45.137239  809501 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:57:45.137272  809501 kubeadm.go:310] 	--control-plane 
	I1007 13:57:45.137282  809501 kubeadm.go:310] 
	I1007 13:57:45.137408  809501 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:57:45.137428  809501 kubeadm.go:310] 
	I1007 13:57:45.137549  809501 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cp8283.ijlcmbv1cyj4hhn4 \
	I1007 13:57:45.137677  809501 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:57:45.139513  809501 kubeadm.go:310] W1007 13:57:34.404192     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:57:45.139856  809501 kubeadm.go:310] W1007 13:57:34.405139     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:57:45.139996  809501 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
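The `--discovery-token-ca-cert-hash sha256:...` value printed in the join command above is not arbitrary: kubeadm publishes the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), and joining nodes pin the CA against it. A short sketch of recomputing that digest from the CA certificate kubeadm wrote during init, for comparison with the printed value (minimal error handling, path as used by kubeadm):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate written by kubeadm init.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA public key; this is
	// the value that appears after "sha256:" in the kubeadm join command.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum[:])
}
```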
	I1007 13:57:45.140046  809501 cni.go:84] Creating CNI manager for "calico"
	I1007 13:57:45.142201  809501 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1007 13:57:43.116384  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:43.117010  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:43.117029  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:43.116981  811115 retry.go:31] will retry after 3.644048048s: waiting for machine to come up
	I1007 13:57:46.763237  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | domain custom-flannel-221184 has defined MAC address 52:54:00:a1:cb:81 in network mk-custom-flannel-221184
	I1007 13:57:46.763812  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | unable to find current IP address of domain custom-flannel-221184 in network mk-custom-flannel-221184
	I1007 13:57:46.763839  810702 main.go:141] libmachine: (custom-flannel-221184) DBG | I1007 13:57:46.763776  811115 retry.go:31] will retry after 3.454902587s: waiting for machine to come up
	I1007 13:57:45.143910  809501 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 13:57:45.143930  809501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I1007 13:57:45.173649  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 13:57:46.814694  809501 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.640997035s)
	I1007 13:57:46.814754  809501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:57:46.814874  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:46.814925  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-221184 minikube.k8s.io/updated_at=2024_10_07T13_57_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=calico-221184 minikube.k8s.io/primary=true
	I1007 13:57:46.838832  809501 ops.go:34] apiserver oom_adj: -16
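The cni.go:84/:182 lines and the kubectl invocations above are the Calico bring-up: the bundled manifest (about 254 KB) is copied to /var/tmp/minikube/cni.yaml on the node, applied with the cluster-local kubectl binary, and the control-plane node then receives minikube's bookkeeping labels. A stripped-down sketch of those two steps run with a host kubectl; the manifest path and the kubeconfig are the ones from the log, the label set is only a subset, and this is not the minikube bootstrapper code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, prints its combined output, and fails loudly on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	// Apply the Calico manifest that minikube staged on the node.
	run("sudo", "kubectl", kubeconfig, "apply", "-f", "/var/tmp/minikube/cni.yaml")
	// Label the control-plane node, as minikube does after a successful init
	// (the log also sets updated_at, version, and commit labels).
	run("sudo", "kubectl", kubeconfig, "label", "--overwrite", "nodes", "calico-221184",
		"minikube.k8s.io/name=calico-221184", "minikube.k8s.io/primary=true")
}
```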
	I1007 13:57:46.969559  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:47.470328  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:47.969691  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:48.470529  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:48.969604  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:49.470403  809501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:57:49.628863  809501 kubeadm.go:1113] duration metric: took 2.814072131s to wait for elevateKubeSystemPrivileges
	I1007 13:57:49.628909  809501 kubeadm.go:394] duration metric: took 15.439354286s to StartCluster
	I1007 13:57:49.628960  809501 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:49.629075  809501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:57:49.630384  809501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:57:49.630617  809501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:57:49.630616  809501 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:57:49.630644  809501 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:57:49.630828  809501 addons.go:69] Setting default-storageclass=true in profile "calico-221184"
	I1007 13:57:49.630838  809501 config.go:182] Loaded profile config "calico-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:57:49.630801  809501 addons.go:69] Setting storage-provisioner=true in profile "calico-221184"
	I1007 13:57:49.630878  809501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-221184"
	I1007 13:57:49.630890  809501 addons.go:234] Setting addon storage-provisioner=true in "calico-221184"
	I1007 13:57:49.630956  809501 host.go:66] Checking if "calico-221184" exists ...
	I1007 13:57:49.631400  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:49.631443  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:49.631504  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:49.631559  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:49.638620  809501 out.go:177] * Verifying Kubernetes components...
	I1007 13:57:49.640498  809501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:57:49.650186  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I1007 13:57:49.650191  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I1007 13:57:49.650712  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:49.650861  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:49.651394  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:49.651415  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:49.651560  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:49.651577  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:49.651875  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:49.652072  809501 main.go:141] libmachine: (calico-221184) Calling .GetState
	I1007 13:57:49.652131  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:49.652770  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:49.652820  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:49.656480  809501 addons.go:234] Setting addon default-storageclass=true in "calico-221184"
	I1007 13:57:49.656537  809501 host.go:66] Checking if "calico-221184" exists ...
	I1007 13:57:49.656949  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:49.656982  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:49.679661  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1007 13:57:49.679872  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I1007 13:57:49.680340  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:49.680351  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:49.680878  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:49.680898  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:49.681268  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:49.681291  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:49.681298  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:49.681475  809501 main.go:141] libmachine: (calico-221184) Calling .GetState
	I1007 13:57:49.681851  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:49.682480  809501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:57:49.682510  809501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:57:49.683589  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:49.685367  809501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:57:49.687397  809501 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:57:49.687425  809501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:57:49.687451  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:49.691218  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:49.691539  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:49.691562  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:49.691868  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:49.692085  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:49.692224  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:49.692319  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:49.701673  809501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45293
	I1007 13:57:49.702427  809501 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:57:49.703020  809501 main.go:141] libmachine: Using API Version  1
	I1007 13:57:49.703049  809501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:57:49.703338  809501 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:57:49.703596  809501 main.go:141] libmachine: (calico-221184) Calling .GetState
	I1007 13:57:49.705279  809501 main.go:141] libmachine: (calico-221184) Calling .DriverName
	I1007 13:57:49.705528  809501 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:57:49.705549  809501 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:57:49.705571  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHHostname
	I1007 13:57:49.708255  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:49.708536  809501 main.go:141] libmachine: (calico-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:5e:47", ip: ""} in network mk-calico-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:57:17 +0000 UTC Type:0 Mac:52:54:00:2a:5e:47 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:calico-221184 Clientid:01:52:54:00:2a:5e:47}
	I1007 13:57:49.708562  809501 main.go:141] libmachine: (calico-221184) DBG | domain calico-221184 has defined IP address 192.168.39.199 and MAC address 52:54:00:2a:5e:47 in network mk-calico-221184
	I1007 13:57:49.708743  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHPort
	I1007 13:57:49.708894  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHKeyPath
	I1007 13:57:49.709009  809501 main.go:141] libmachine: (calico-221184) Calling .GetSSHUsername
	I1007 13:57:49.709095  809501 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/calico-221184/id_rsa Username:docker}
	I1007 13:57:49.955809  809501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:57:49.955975  809501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 13:57:50.000868  809501 node_ready.go:35] waiting up to 15m0s for node "calico-221184" to be "Ready" ...
	I1007 13:57:50.104102  809501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:57:50.187552  809501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:57:50.338327  809501 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 13:57:50.403480  809501 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:50.403511  809501 main.go:141] libmachine: (calico-221184) Calling .Close
	I1007 13:57:50.403834  809501 main.go:141] libmachine: (calico-221184) DBG | Closing plugin on server side
	I1007 13:57:50.403890  809501 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:50.403903  809501 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:50.403916  809501 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:50.403925  809501 main.go:141] libmachine: (calico-221184) Calling .Close
	I1007 13:57:50.404184  809501 main.go:141] libmachine: (calico-221184) DBG | Closing plugin on server side
	I1007 13:57:50.404236  809501 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:50.404249  809501 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:50.452322  809501 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:50.452349  809501 main.go:141] libmachine: (calico-221184) Calling .Close
	I1007 13:57:50.452681  809501 main.go:141] libmachine: (calico-221184) DBG | Closing plugin on server side
	I1007 13:57:50.452724  809501 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:50.452735  809501 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:50.845396  809501 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-221184" context rescaled to 1 replicas
	I1007 13:57:51.034077  809501 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:51.034113  809501 main.go:141] libmachine: (calico-221184) Calling .Close
	I1007 13:57:51.034473  809501 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:51.034491  809501 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:51.034501  809501 main.go:141] libmachine: Making call to close driver server
	I1007 13:57:51.034509  809501 main.go:141] libmachine: (calico-221184) Calling .Close
	I1007 13:57:51.034896  809501 main.go:141] libmachine: (calico-221184) DBG | Closing plugin on server side
	I1007 13:57:51.034939  809501 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:57:51.034949  809501 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:57:51.037156  809501 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
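With the calico-221184 profile up, the log reports the two default addons (default-storageclass, storage-provisioner) as enabled. A quick, hedged verification from the host, assuming the kubectl context created for this profile and the `integration-test=storage-provisioner` label that minikube's storage-provisioner manifest carries (the same label is visible on the sandbox dump of another profile later in this report):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Check the storage-provisioner pod and the default StorageClass created by
	// the two enabled addons.
	for _, args := range [][]string{
		{"--context", "calico-221184", "-n", "kube-system", "get", "pods",
			"-l", "integration-test=storage-provisioner"},
		{"--context", "calico-221184", "get", "storageclass"},
	} {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Print(string(out))
	}
}
```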
	
	
	==> CRI-O <==
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.530860782Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d771bdef2ff46c6ddeb4a1a0764ba853f99bbca6763b8c5b2256ecff8258fa85,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-drcg5,Uid:c88368de-954a-484b-8332-a05bfb0b6c9b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308923453636992,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-drcg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88368de-954a-484b-8332-a05bfb0b6c9b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:43.144835206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:23077570-0411-48e4-9f38-2933
e98132b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308923327900519,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T13:48:43.018984265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mrgdp,Uid:a412fc5b-c29a-403d-87c3-2d0d035890fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921510644293,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:41.187306913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-szgtd,Uid:579c2478
-e31e-41a7-b18b-749e86c54764,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921465470416,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:41.154946689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&PodSandboxMetadata{Name:kube-proxy-jpvx5,Uid:df825f23-4b34-44f3-a641-905c8bdc25ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921285228903,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:40.969814796Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-489319,Uid:9f08951ea541525829047ffe90f29a47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308910454335599,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.101:2379,kubernetes.io/config.hash: 9f08951ea541525829047ffe90f29a47,kubernetes.io/config.seen: 2024-10-07T13:48:30.005601205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:487d6489d11c81f7366fb8e953ed9f707e9
86af8c3d162cff86930ddddc2a722,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-489319,Uid:62651fa186d270c62f23f7d307fe1a21,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728308910452536465,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.101:8444,kubernetes.io/config.hash: 62651fa186d270c62f23f7d307fe1a21,kubernetes.io/config.seen: 2024-10-07T13:48:30.005602515Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-489319,Uid:1a78d6497a45d13aff1bdc0c052f5f6d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728308910442293869,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a78d6497a45d13aff1bdc0c052f5f6d,kubernetes.io/config.seen: 2024-10-07T13:48:30.005599810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-489319,Uid:899c94957ea4481f28dea1c0c559d6a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308910439587569,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 899c94957ea4481f28dea1c0c559d6a8,kubernetes.io/config.seen: 2024-10-07T13:48:30.005596206Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-489319,Uid:62651fa186d270c62f23f7d307fe1a21,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728308621912916703,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.101:8444,kubernetes.io/config.hash: 62651fa186d270c62f23f7d307fe1a21,kubernetes.io/config.s
een: 2024-10-07T13:43:41.424456710Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0c673e9d-a2f8-4901-9df6-2fd961516f69 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.532121006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1aa05d3-137d-4a69-8db3-2b57eebe3472 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.532197116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1aa05d3-137d-4a69-8db3-2b57eebe3472 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.532467870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1aa05d3-137d-4a69-8db3-2b57eebe3472 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.546812845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b02a26c0-11e2-4c54-bc60-ed36dec3a19d name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.546954055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b02a26c0-11e2-4c54-bc60-ed36dec3a19d name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.548831970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32262e9f-2111-4d72-af29-ad8a4b6a495f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.549820633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309472549783956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32262e9f-2111-4d72-af29-ad8a4b6a495f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.550579062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74c788fe-a9fd-4844-8ccb-b0a407cdb5a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.550644039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74c788fe-a9fd-4844-8ccb-b0a407cdb5a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.550878142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74c788fe-a9fd-4844-8ccb-b0a407cdb5a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.611449225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3560b910-1866-4e45-9232-954f182fa892 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.611557654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3560b910-1866-4e45-9232-954f182fa892 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.613074730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4760b1a5-9ae3-43e6-aa9f-64bcb7e2e404 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.613701624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309472613670479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4760b1a5-9ae3-43e6-aa9f-64bcb7e2e404 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.614498035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e468a7d4-6e81-41b1-aaf9-3beccfc27084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.614574015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e468a7d4-6e81-41b1-aaf9-3beccfc27084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.616747969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e468a7d4-6e81-41b1-aaf9-3beccfc27084 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.662980336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ba5189f-87c1-4022-a671-9fb5a307bad4 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.663118498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ba5189f-87c1-4022-a671-9fb5a307bad4 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.664760738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=448be100-269d-40ea-abf4-a6fb843bebd9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.665347590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309472665305884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=448be100-269d-40ea-abf4-a6fb843bebd9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.666282928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82449a4e-3201-4746-9a00-d4d21c35a2e9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.666355773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82449a4e-3201-4746-9a00-d4d21c35a2e9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:57:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 13:57:52.666599665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82449a4e-3201-4746-9a00-d4d21c35a2e9 name=/runtime.v1.RuntimeService/ListContainers
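The repeated Version / ImageFsInfo / ListContainers entries above are the kubelet's periodic CRI polling of CRI-O, captured from the crio journal on the node. As a rough sketch (assuming the standard crio systemd unit name and the profile named in this section), the same stream can be read directly:

	# open a shell in the VM for this profile
	minikube ssh -p default-k8s-diff-port-489319
	# then, inside the VM, read the recent CRI-O debug log
	sudo journalctl -u crio --no-pager | tail -n 100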
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	221460feca963       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   4a0fb542274af       storage-provisioner
	08241c405a16f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   39d37a287f810       coredns-7c65d6cfc9-szgtd
	2ca3fa3510acc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   94a04d87f5059       coredns-7c65d6cfc9-mrgdp
	327a40c7d2ddc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   ab5c5a8580645       kube-proxy-jpvx5
	bc9755b466e84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   3b72698db0300       etcd-default-k8s-diff-port-489319
	4ebb50a700da6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   85bd497799d85       kube-scheduler-default-k8s-diff-port-489319
	951c910599f12       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   b758eee933967       kube-controller-manager-default-k8s-diff-port-489319
	9b5bced8cf581       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   487d6489d11c8       kube-apiserver-default-k8s-diff-port-489319
	99e283eccd53f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   2ca5d4c120c2c       kube-apiserver-default-k8s-diff-port-489319
	
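The container status table is CRI-O's own view of the running and exited containers; a roughly equivalent listing (the command itself is an assumption, not part of the captured output) can be produced inside the node with crictl:

	sudo crictl ps -a           # all containers, including exited attempts
	sudo crictl ps -a -o json   # the same data as structured JSON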
	
	==> coredns [08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-489319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-489319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=default-k8s-diff-port-489319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-489319
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:53:52 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:53:52 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:53:52 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:53:52 +0000   Mon, 07 Oct 2024 13:48:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.101
	  Hostname:    default-k8s-diff-port-489319
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 322d1f8dd6734fdeb4ccbd498b03009c
	  System UUID:                322d1f8d-d673-4fde-b4cc-bd498b03009c
	  Boot ID:                    9a5d800d-8ecc-4df9-933a-cc537b29b76b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-mrgdp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-szgtd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-489319                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-489319             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-489319    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-jpvx5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-489319             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-drcg5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-489319 event: Registered Node default-k8s-diff-port-489319 in Controller
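The node description above is the standard kubectl describe output for this node; assuming the kubeconfig context created for the profile carries the same name, it can be regenerated with:

	kubectl --context default-k8s-diff-port-489319 describe node default-k8s-diff-port-489319
	kubectl --context default-k8s-diff-port-489319 get node -o wide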
	
	
	==> dmesg <==
	[  +0.052554] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042221] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944154] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.748913] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628467] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.355286] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.060842] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063310] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.202201] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.130526] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.325083] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.371883] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.072926] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.048086] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +5.643513] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.123872] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 7 13:48] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.146429] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.634340] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.927918] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +5.451772] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.080265] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +5.960559] kauditd_printk_skb: 86 callbacks suppressed
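The dmesg excerpt is the guest kernel's ring buffer from the Buildroot VM; it can be re-read after the fact by opening a shell on the node (a sketch, assuming the profile is still running):

	minikube ssh -p default-k8s-diff-port-489319
	dmesg | tail -n 40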
	
	
	==> etcd [bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac] <==
	{"level":"info","ts":"2024-10-07T13:55:52.611505Z","caller":"traceutil/trace.go:171","msg":"trace[941815935] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:800; }","duration":"316.210911ms","start":"2024-10-07T13:55:52.295288Z","end":"2024-10-07T13:55:52.611499Z","steps":["trace[941815935] 'range keys from in-memory index tree'  (duration: 314.663368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:52.611543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:55:52.295235Z","time spent":"316.288117ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-07T13:55:52.610305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.873552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:52.611727Z","caller":"traceutil/trace.go:171","msg":"trace[1708213575] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:800; }","duration":"290.299229ms","start":"2024-10-07T13:55:52.321421Z","end":"2024-10-07T13:55:52.611721Z","steps":["trace[1708213575] 'range keys from in-memory index tree'  (duration: 288.730934ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:55:52.978402Z","caller":"traceutil/trace.go:171","msg":"trace[116961734] transaction","detail":"{read_only:false; response_revision:801; number_of_response:1; }","duration":"361.171809ms","start":"2024-10-07T13:55:52.617215Z","end":"2024-10-07T13:55:52.978387Z","steps":["trace[116961734] 'process raft request'  (duration: 360.952673ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:52.978596Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:55:52.617195Z","time spent":"361.279745ms","remote":"127.0.0.1:53580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:799 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-07T13:55:53.430856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.946841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:53.431515Z","caller":"traceutil/trace.go:171","msg":"trace[1547217371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"135.613902ms","start":"2024-10-07T13:55:53.295883Z","end":"2024-10-07T13:55:53.431497Z","steps":["trace[1547217371] 'range keys from in-memory index tree'  (duration: 134.825179ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:53.431334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.140921ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:53.431694Z","caller":"traceutil/trace.go:171","msg":"trace[1439734625] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:801; }","duration":"110.507324ms","start":"2024-10-07T13:55:53.321173Z","end":"2024-10-07T13:55:53.431680Z","steps":["trace[1439734625] 'range keys from in-memory index tree'  (duration: 110.042491ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:56:21.765692Z","caller":"traceutil/trace.go:171","msg":"trace[1126789199] linearizableReadLoop","detail":"{readStateIndex:931; appliedIndex:930; }","duration":"472.574452ms","start":"2024-10-07T13:56:21.293100Z","end":"2024-10-07T13:56:21.765674Z","steps":["trace[1126789199] 'read index received'  (duration: 472.319399ms)","trace[1126789199] 'applied index is now lower than readState.Index'  (duration: 254.321µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:56:21.765840Z","caller":"traceutil/trace.go:171","msg":"trace[1646594423] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"626.585446ms","start":"2024-10-07T13:56:21.139233Z","end":"2024-10-07T13:56:21.765818Z","steps":["trace[1646594423] 'process raft request'  (duration: 626.25495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:56:21.765963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:56:21.139214Z","time spent":"626.653657ms","remote":"127.0.0.1:53580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:822 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-07T13:56:21.766212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"444.562947ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:56:21.766289Z","caller":"traceutil/trace.go:171","msg":"trace[1224456714] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:823; }","duration":"444.645948ms","start":"2024-10-07T13:56:21.321631Z","end":"2024-10-07T13:56:21.766277Z","steps":["trace[1224456714] 'agreement among raft nodes before linearized reading'  (duration: 444.547133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:56:21.766532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"473.432524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:56:21.766586Z","caller":"traceutil/trace.go:171","msg":"trace[1834092598] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"473.489455ms","start":"2024-10-07T13:56:21.293088Z","end":"2024-10-07T13:56:21.766577Z","steps":["trace[1834092598] 'agreement among raft nodes before linearized reading'  (duration: 473.416905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:56:21.766701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:56:21.292993Z","time spent":"473.633667ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-07T13:56:21.767163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.485698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:56:21.767231Z","caller":"traceutil/trace.go:171","msg":"trace[1125731827] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:823; }","duration":"265.5737ms","start":"2024-10-07T13:56:21.501648Z","end":"2024-10-07T13:56:21.767221Z","steps":["trace[1125731827] 'agreement among raft nodes before linearized reading'  (duration: 265.459982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:56:22.398132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.889759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:56:22.398217Z","caller":"traceutil/trace.go:171","msg":"trace[669368515] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"104.10896ms","start":"2024-10-07T13:56:22.294092Z","end":"2024-10-07T13:56:22.398201Z","steps":["trace[669368515] 'range keys from in-memory index tree'  (duration: 103.814324ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:57:10.296185Z","caller":"traceutil/trace.go:171","msg":"trace[2105724829] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"243.270392ms","start":"2024-10-07T13:57:10.052897Z","end":"2024-10-07T13:57:10.296167Z","steps":["trace[2105724829] 'process raft request'  (duration: 243.088417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:57:10.717681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.068661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:57:10.717881Z","caller":"traceutil/trace.go:171","msg":"trace[1929526015] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:862; }","duration":"216.280234ms","start":"2024-10-07T13:57:10.501587Z","end":"2024-10-07T13:57:10.717867Z","steps":["trace[1929526015] 'range keys from in-memory index tree'  (duration: 215.70771ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:57:53 up 14 min,  0 users,  load average: 0.20, 0.33, 0.26
	Linux default-k8s-diff-port-489319 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b] <==
	W1007 13:48:22.875380       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.890184       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.892854       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.898292       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.959233       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:23.310768       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:23.313351       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:26.746608       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:26.942815       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.096161       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.101199       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.259618       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.348684       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.392327       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.456343       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.504562       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.603419       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.764338       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.797341       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.819723       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.842503       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.851301       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.887234       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:28.022747       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:28.057190       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588] <==
	W1007 13:53:34.311077       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:53:34.311188       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:53:34.312153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:53:34.312261       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:54:34.312852       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:34.313181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:54:34.312964       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:34.313309       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:54:34.314518       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:54:34.314545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:56:34.315115       1 handler_proxy.go:99] no RequestInfo found in the context
	W1007 13:56:34.315177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:56:34.315511       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1007 13:56:34.315664       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:56:34.316842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:56:34.316910       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2] <==
	E1007 13:52:40.304671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:52:40.779719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:10.311640       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:10.788746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:40.318941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:40.802642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:53:52.397803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-489319"
	E1007 13:54:10.327925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:10.812185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:54:40.334334       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:40.820979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:54:44.266130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="324.9µs"
	I1007 13:54:55.261549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.392µs"
	E1007 13:55:10.340672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:55:10.828923       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:55:40.349499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:55:40.843390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:56:10.357761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:10.853138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:56:40.366305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:40.862709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:57:10.374449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:57:10.869984       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:57:40.383849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:57:40.883800       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:48:42.383238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:48:42.493270       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.101"]
	E1007 13:48:42.493386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:48:42.736147       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:48:42.736194       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:48:42.736222       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:48:42.794249       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:48:42.794569       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:48:42.794601       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:48:42.800907       1 config.go:199] "Starting service config controller"
	I1007 13:48:42.800973       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:48:42.801056       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:48:42.801061       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:48:42.808188       1 config.go:328] "Starting node config controller"
	I1007 13:48:42.808222       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:48:42.901242       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:48:42.901308       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:48:42.910386       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328] <==
	W1007 13:48:34.472815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:48:34.472893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.481716       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:48:34.481777       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 13:48:34.494325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.494566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.551282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:48:34.551344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.551406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.551445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.563652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:48:34.563740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.619301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 13:48:34.619414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.661492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:48:34.661549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.738473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:48:34.738530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.775243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:48:34.775346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.788281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:48:34.788338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.850213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.850266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1007 13:48:36.747924       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:56:42 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:42.250324    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:56:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:46.469848    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309406469109497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:46.469958    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309406469109497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:55 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:55.244970    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:56:56 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:56.472544    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309416471758326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:56 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:56:56.472926    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309416471758326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:06 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:06.246304    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:57:06 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:06.475844    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309426475318721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:06 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:06.476286    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309426475318721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:16 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:16.479356    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309436478708723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:16 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:16.479869    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309436478708723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:19 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:19.243926    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:57:26 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:26.481909    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309446481300823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:26 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:26.481954    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309446481300823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:31 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:31.244844    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:36.276219    2908 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:36.484948    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309456484152529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:36.485079    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309456484152529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:45 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:45.244866    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:57:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:46.488148    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309466487499127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:57:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:57:46.488609    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309466487499127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81] <==
	I1007 13:48:43.615433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:48:43.630738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:48:43.631434       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:48:43.667112       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:48:43.667461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a!
	I1007 13:48:43.668680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e82ad34b-00ed-407b-b175-8d583bc7e6c6", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a became leader
	I1007 13:48:43.767621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-drcg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5: exit status 1 (103.130321ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-drcg5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (441.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-653322 -n embed-certs-653322
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-07 13:56:33.900004961 +0000 UTC m=+6527.098544942
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-653322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-653322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.156µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-653322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-653322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-653322 logs -n 25: (1.605140914s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:54 UTC | 07 Oct 24 13:54 UTC |
	| start   | -p newest-cni-006310 --memory=2200 --alsologtostderr   | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:54 UTC | 07 Oct 24 13:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:55 UTC |
	| start   | -p auto-221184 --memory=3072                           | auto-221184                  | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:56 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-006310             | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-006310                                   | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-006310                  | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-006310 --memory=2200 --alsologtostderr   | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:55 UTC | 07 Oct 24 13:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p auto-221184 pgrep -a                                | auto-221184                  | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| image   | newest-cni-006310 image list                           | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC | 07 Oct 24 13:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-006310                                   | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:56 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:55:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
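
The header above documents the klog prefix used for every trace line that follows. As an illustrative aid only (the regular expression and field names below are my own, not part of minikube or klog), a short Go snippet that splits one of these lines into its fields could look like this:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	// Sample line taken verbatim from the trace below.
	sample := "I1007 13:55:55.575175  808244 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
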
	I1007 13:55:55.575175  808244 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:55:55.575448  808244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:55:55.575457  808244 out.go:358] Setting ErrFile to fd 2...
	I1007 13:55:55.575462  808244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:55:55.575653  808244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:55:55.576272  808244 out.go:352] Setting JSON to false
	I1007 13:55:55.577368  808244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13105,"bootTime":1728296251,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:55:55.577439  808244 start.go:139] virtualization: kvm guest
	I1007 13:55:55.579756  808244 out.go:177] * [newest-cni-006310] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:55:55.581286  808244 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:55:55.581357  808244 notify.go:220] Checking for updates...
	I1007 13:55:55.583963  808244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:55:55.585346  808244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:55:55.586545  808244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:55:55.587845  808244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:55:55.589192  808244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:55:55.590674  808244 config.go:182] Loaded profile config "newest-cni-006310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:55:55.591098  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:55:55.591175  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:55:55.606679  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I1007 13:55:55.607247  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:55:55.607885  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:55:55.607908  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:55:55.608285  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:55:55.608491  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:55:55.608774  808244 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:55:55.609130  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:55:55.609174  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:55:55.624963  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I1007 13:55:55.625505  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:55:55.626060  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:55:55.626090  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:55:55.626478  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:55:55.626681  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:55:55.664976  808244 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:55:55.666276  808244 start.go:297] selected driver: kvm2
	I1007 13:55:55.666296  808244 start.go:901] validating driver "kvm2" against &{Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:55:55.666423  808244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:55:55.667097  808244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:55:55.667171  808244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:55:55.683141  808244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:55:55.683590  808244 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 13:55:55.683622  808244 cni.go:84] Creating CNI manager for ""
	I1007 13:55:55.683672  808244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:55:55.683722  808244 start.go:340] cluster config:
	{Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:55:55.683843  808244 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:55:55.685992  808244 out.go:177] * Starting "newest-cni-006310" primary control-plane node in "newest-cni-006310" cluster
	I1007 13:55:55.687391  808244 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:55:55.687462  808244 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:55:55.687483  808244 cache.go:56] Caching tarball of preloaded images
	I1007 13:55:55.687578  808244 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:55:55.687592  808244 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:55:55.687730  808244 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/config.json ...
	I1007 13:55:55.688099  808244 start.go:360] acquireMachinesLock for newest-cni-006310: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:55:55.688170  808244 start.go:364] duration metric: took 40.865µs to acquireMachinesLock for "newest-cni-006310"
	I1007 13:55:55.688194  808244 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:55:55.688202  808244 fix.go:54] fixHost starting: 
	I1007 13:55:55.688497  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:55:55.688543  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:55:55.705243  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1007 13:55:55.705762  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:55:55.706345  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:55:55.706371  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:55:55.706669  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:55:55.706893  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:55:55.707034  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:55:55.708914  808244 fix.go:112] recreateIfNeeded on newest-cni-006310: state=Stopped err=<nil>
	I1007 13:55:55.708969  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	W1007 13:55:55.709169  808244 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:55:55.711671  808244 out.go:177] * Restarting existing kvm2 VM for "newest-cni-006310" ...
	I1007 13:55:54.141020  807823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:55:54.141211  807823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-221184 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I1007 13:55:54.411533  807823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:55:54.624546  807823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:55:54.931854  807823 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:55:54.931974  807823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:55:55.030334  807823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:55:55.183281  807823 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:55:55.352146  807823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:55:55.932672  807823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:55:56.098169  807823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:55:56.098929  807823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:55:56.104248  807823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:55:56.106363  807823 out.go:235]   - Booting up control plane ...
	I1007 13:55:56.106485  807823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:55:56.106577  807823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:55:56.106986  807823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:55:56.126349  807823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:55:56.136080  807823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:55:56.136156  807823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:55:56.273727  807823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:55:56.273888  807823 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:55:56.786754  807823 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.04048ms
	I1007 13:55:56.786872  807823 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:55:55.713252  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Start
	I1007 13:55:55.713546  808244 main.go:141] libmachine: (newest-cni-006310) Ensuring networks are active...
	I1007 13:55:55.714521  808244 main.go:141] libmachine: (newest-cni-006310) Ensuring network default is active
	I1007 13:55:55.714867  808244 main.go:141] libmachine: (newest-cni-006310) Ensuring network mk-newest-cni-006310 is active
	I1007 13:55:55.715325  808244 main.go:141] libmachine: (newest-cni-006310) Getting domain xml...
	I1007 13:55:55.716255  808244 main.go:141] libmachine: (newest-cni-006310) Creating domain...
	I1007 13:55:56.110566  808244 main.go:141] libmachine: (newest-cni-006310) Waiting to get IP...
	I1007 13:55:56.111456  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:56.112022  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:56.112131  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:56.112007  808278 retry.go:31] will retry after 302.412511ms: waiting for machine to come up
	I1007 13:55:56.416693  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:56.417255  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:56.417287  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:56.417205  808278 retry.go:31] will retry after 365.431304ms: waiting for machine to come up
	I1007 13:55:56.784343  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:56.785237  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:56.785269  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:56.785141  808278 retry.go:31] will retry after 450.053194ms: waiting for machine to come up
	I1007 13:55:57.236920  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:57.237487  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:57.237520  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:57.237423  808278 retry.go:31] will retry after 508.993753ms: waiting for machine to come up
	I1007 13:55:57.748404  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:57.748905  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:57.748937  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:57.748854  808278 retry.go:31] will retry after 734.537213ms: waiting for machine to come up
	I1007 13:55:58.484841  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:58.485422  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:58.485445  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:58.485370  808278 retry.go:31] will retry after 734.198545ms: waiting for machine to come up
	I1007 13:55:59.221217  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:59.221738  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:59.221763  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:59.221684  808278 retry.go:31] will retry after 1.174423527s: waiting for machine to come up
	I1007 13:56:00.398244  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:00.398771  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:00.398804  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:00.398743  808278 retry.go:31] will retry after 1.441578926s: waiting for machine to come up
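
The libmachine lines above show the driver polling libvirt for the VM's DHCP lease and retrying with an increasing delay ("will retry after ...") until the machine comes up. A minimal Go sketch of that retry-with-backoff pattern follows; the function name, the lookup callback, and the backoff numbers are illustrative, not minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries lookup with a roughly exponential backoff until it
// returns an address or the deadline passes, mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the delay, loosely capped at a few seconds
		}
	}
}

func main() {
	// Hypothetical lookup that never finds an IP, just to show the control flow.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(err)
}
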
	I1007 13:56:02.287649  807823 kubeadm.go:310] [api-check] The API server is healthy after 5.502523376s
	I1007 13:56:02.304616  807823 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:56:02.339965  807823 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:56:02.389906  807823 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:56:02.390167  807823 kubeadm.go:310] [mark-control-plane] Marking the node auto-221184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:56:02.409210  807823 kubeadm.go:310] [bootstrap-token] Using token: 9ljhyl.kfg2ayy4bqpm4ef1
	I1007 13:56:02.410991  807823 out.go:235]   - Configuring RBAC rules ...
	I1007 13:56:02.411160  807823 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:56:02.422608  807823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:56:02.442045  807823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:56:02.449396  807823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:56:02.458121  807823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:56:02.466693  807823 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:56:02.696264  807823 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:56:03.153477  807823 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:56:03.696116  807823 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:56:03.696145  807823 kubeadm.go:310] 
	I1007 13:56:03.696218  807823 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:56:03.696226  807823 kubeadm.go:310] 
	I1007 13:56:03.696351  807823 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:56:03.696363  807823 kubeadm.go:310] 
	I1007 13:56:03.696396  807823 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:56:03.696476  807823 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:56:03.696543  807823 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:56:03.696552  807823 kubeadm.go:310] 
	I1007 13:56:03.696632  807823 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:56:03.696641  807823 kubeadm.go:310] 
	I1007 13:56:03.696718  807823 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:56:03.696730  807823 kubeadm.go:310] 
	I1007 13:56:03.696798  807823 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:56:03.696895  807823 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:56:03.696961  807823 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:56:03.696968  807823 kubeadm.go:310] 
	I1007 13:56:03.697032  807823 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:56:03.697090  807823 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:56:03.697094  807823 kubeadm.go:310] 
	I1007 13:56:03.697156  807823 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ljhyl.kfg2ayy4bqpm4ef1 \
	I1007 13:56:03.697234  807823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:56:03.697262  807823 kubeadm.go:310] 	--control-plane 
	I1007 13:56:03.697267  807823 kubeadm.go:310] 
	I1007 13:56:03.697336  807823 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:56:03.697340  807823 kubeadm.go:310] 
	I1007 13:56:03.697402  807823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ljhyl.kfg2ayy4bqpm4ef1 \
	I1007 13:56:03.697508  807823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:56:03.698782  807823 kubeadm.go:310] W1007 13:55:52.622067     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:56:03.699224  807823 kubeadm.go:310] W1007 13:55:52.622911     840 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:56:03.699393  807823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
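
The kubeadm join commands printed above carry a --discovery-token-ca-cert-hash value. Per the kubeadm documentation, this is a SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A hedged Go sketch of recomputing it from the CA file kubeadm writes on the control-plane node (path and error handling simplified):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Default CA location used by kubeadm on the control-plane node; adjust as needed.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}

The output should match the sha256:... value shown in the join command above when run against the same cluster's CA.
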
	I1007 13:56:03.699439  807823 cni.go:84] Creating CNI manager for ""
	I1007 13:56:03.699452  807823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:56:03.701477  807823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:56:03.702959  807823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:56:03.715287  807823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:56:03.736430  807823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:56:03.736524  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:03.736626  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-221184 minikube.k8s.io/updated_at=2024_10_07T13_56_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=auto-221184 minikube.k8s.io/primary=true
	I1007 13:56:03.905923  807823 ops.go:34] apiserver oom_adj: -16
	I1007 13:56:03.906142  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:01.841740  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:01.842356  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:01.842387  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:01.842316  808278 retry.go:31] will retry after 1.245992222s: waiting for machine to come up
	I1007 13:56:03.089954  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:03.090457  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:03.090488  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:03.090398  808278 retry.go:31] will retry after 1.51454538s: waiting for machine to come up
	I1007 13:56:04.606084  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:04.606607  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:04.606633  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:04.606558  808278 retry.go:31] will retry after 2.076984717s: waiting for machine to come up
	I1007 13:56:04.407050  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:04.906218  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:05.406354  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:05.906255  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:06.407053  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:06.907015  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:07.406287  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:07.906762  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:08.406306  807823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:56:08.561514  807823 kubeadm.go:1113] duration metric: took 4.825078796s to wait for elevateKubeSystemPrivileges
	I1007 13:56:08.561552  807823 kubeadm.go:394] duration metric: took 16.176758807s to StartCluster
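
The repeated "kubectl get sa default" invocations above are minikube waiting for the default service account to exist before it finishes elevating kube-system privileges. A rough equivalent of that poll using os/exec is sketched below; the binary and kubeconfig paths are copied from the log, while the retry interval and timeout are assumptions:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)

	for {
		// Same check as the log above: succeed once the default SA exists.
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for the default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}
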
	I1007 13:56:08.561577  807823 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:08.561668  807823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:56:08.563472  807823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:08.563838  807823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:56:08.563831  807823 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:56:08.563864  807823 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:56:08.563973  807823 addons.go:69] Setting default-storageclass=true in profile "auto-221184"
	I1007 13:56:08.564040  807823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-221184"
	I1007 13:56:08.564089  807823 config.go:182] Loaded profile config "auto-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:08.563955  807823 addons.go:69] Setting storage-provisioner=true in profile "auto-221184"
	I1007 13:56:08.564155  807823 addons.go:234] Setting addon storage-provisioner=true in "auto-221184"
	I1007 13:56:08.564199  807823 host.go:66] Checking if "auto-221184" exists ...
	I1007 13:56:08.564528  807823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:08.564573  807823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:08.564695  807823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:08.564792  807823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:08.565568  807823 out.go:177] * Verifying Kubernetes components...
	I1007 13:56:08.567233  807823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:56:08.584145  807823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I1007 13:56:08.584792  807823 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:08.585385  807823 main.go:141] libmachine: Using API Version  1
	I1007 13:56:08.585414  807823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:08.585473  807823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1007 13:56:08.585886  807823 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:08.586463  807823 main.go:141] libmachine: Using API Version  1
	I1007 13:56:08.586481  807823 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:08.586490  807823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:08.586735  807823 main.go:141] libmachine: (auto-221184) Calling .GetState
	I1007 13:56:08.586930  807823 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:08.587588  807823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:08.587631  807823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:08.591676  807823 addons.go:234] Setting addon default-storageclass=true in "auto-221184"
	I1007 13:56:08.591730  807823 host.go:66] Checking if "auto-221184" exists ...
	I1007 13:56:08.592168  807823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:08.592218  807823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:08.610192  807823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
	I1007 13:56:08.610858  807823 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:08.611618  807823 main.go:141] libmachine: Using API Version  1
	I1007 13:56:08.611640  807823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:08.612227  807823 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:08.612631  807823 main.go:141] libmachine: (auto-221184) Calling .GetState
	I1007 13:56:08.613913  807823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I1007 13:56:08.614591  807823 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:08.614856  807823 main.go:141] libmachine: (auto-221184) Calling .DriverName
	I1007 13:56:08.615170  807823 main.go:141] libmachine: Using API Version  1
	I1007 13:56:08.615195  807823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:08.615527  807823 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:08.615984  807823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:08.616035  807823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:08.616790  807823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:56:08.618747  807823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:56:08.618768  807823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:56:08.618790  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHHostname
	I1007 13:56:08.622697  807823 main.go:141] libmachine: (auto-221184) DBG | domain auto-221184 has defined MAC address 52:54:00:ff:b3:0c in network mk-auto-221184
	I1007 13:56:08.623167  807823 main.go:141] libmachine: (auto-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b3:0c", ip: ""} in network mk-auto-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:55:34 +0000 UTC Type:0 Mac:52:54:00:ff:b3:0c Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:auto-221184 Clientid:01:52:54:00:ff:b3:0c}
	I1007 13:56:08.623190  807823 main.go:141] libmachine: (auto-221184) DBG | domain auto-221184 has defined IP address 192.168.39.240 and MAC address 52:54:00:ff:b3:0c in network mk-auto-221184
	I1007 13:56:08.623533  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHPort
	I1007 13:56:08.623754  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHKeyPath
	I1007 13:56:08.623896  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHUsername
	I1007 13:56:08.624024  807823 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/auto-221184/id_rsa Username:docker}
	I1007 13:56:08.634704  807823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I1007 13:56:08.635209  807823 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:08.635781  807823 main.go:141] libmachine: Using API Version  1
	I1007 13:56:08.635803  807823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:08.636185  807823 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:08.636424  807823 main.go:141] libmachine: (auto-221184) Calling .GetState
	I1007 13:56:08.638458  807823 main.go:141] libmachine: (auto-221184) Calling .DriverName
	I1007 13:56:08.638690  807823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:56:08.638704  807823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:56:08.638723  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHHostname
	I1007 13:56:08.642295  807823 main.go:141] libmachine: (auto-221184) DBG | domain auto-221184 has defined MAC address 52:54:00:ff:b3:0c in network mk-auto-221184
	I1007 13:56:08.642784  807823 main.go:141] libmachine: (auto-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:b3:0c", ip: ""} in network mk-auto-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:55:34 +0000 UTC Type:0 Mac:52:54:00:ff:b3:0c Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:auto-221184 Clientid:01:52:54:00:ff:b3:0c}
	I1007 13:56:08.642809  807823 main.go:141] libmachine: (auto-221184) DBG | domain auto-221184 has defined IP address 192.168.39.240 and MAC address 52:54:00:ff:b3:0c in network mk-auto-221184
	I1007 13:56:08.643149  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHPort
	I1007 13:56:08.643410  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHKeyPath
	I1007 13:56:08.643566  807823 main.go:141] libmachine: (auto-221184) Calling .GetSSHUsername
	I1007 13:56:08.643786  807823 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/auto-221184/id_rsa Username:docker}
	I1007 13:56:08.790297  807823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 13:56:08.839057  807823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:56:08.967990  807823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:56:09.074687  807823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:56:09.332714  807823 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 13:56:09.334504  807823 node_ready.go:35] waiting up to 15m0s for node "auto-221184" to be "Ready" ...
	I1007 13:56:09.348765  807823 node_ready.go:49] node "auto-221184" has status "Ready":"True"
	I1007 13:56:09.348796  807823 node_ready.go:38] duration metric: took 14.251877ms for node "auto-221184" to be "Ready" ...
	I1007 13:56:09.348810  807823 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:56:09.361353  807823 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-clmkb" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:09.631382  807823 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:09.631412  807823 main.go:141] libmachine: (auto-221184) Calling .Close
	I1007 13:56:09.631390  807823 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:09.631467  807823 main.go:141] libmachine: (auto-221184) Calling .Close
	I1007 13:56:09.631890  807823 main.go:141] libmachine: (auto-221184) DBG | Closing plugin on server side
	I1007 13:56:09.631905  807823 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:09.631909  807823 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:09.631916  807823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:09.631924  807823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:09.631932  807823 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:09.631943  807823 main.go:141] libmachine: (auto-221184) Calling .Close
	I1007 13:56:09.631934  807823 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:09.631995  807823 main.go:141] libmachine: (auto-221184) Calling .Close
	I1007 13:56:09.632210  807823 main.go:141] libmachine: (auto-221184) DBG | Closing plugin on server side
	I1007 13:56:09.632266  807823 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:09.632303  807823 main.go:141] libmachine: (auto-221184) DBG | Closing plugin on server side
	I1007 13:56:09.632313  807823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:09.632272  807823 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:09.632425  807823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:09.648513  807823 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:09.648534  807823 main.go:141] libmachine: (auto-221184) Calling .Close
	I1007 13:56:09.648923  807823 main.go:141] libmachine: (auto-221184) DBG | Closing plugin on server side
	I1007 13:56:09.649015  807823 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:09.649031  807823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:09.650904  807823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 13:56:06.685303  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:06.685964  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:06.685994  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:06.685916  808278 retry.go:31] will retry after 2.302261984s: waiting for machine to come up
	I1007 13:56:08.991361  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:08.991895  808244 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:56:08.991939  808244 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:56:08.991846  808278 retry.go:31] will retry after 3.707309224s: waiting for machine to come up
	I1007 13:56:09.652599  807823 addons.go:510] duration metric: took 1.088734345s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 13:56:09.837836  807823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-221184" context rescaled to 1 replicas
	I1007 13:56:11.365023  807823 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-clmkb" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-clmkb" not found
	I1007 13:56:11.365059  807823 pod_ready.go:82] duration metric: took 2.003663921s for pod "coredns-7c65d6cfc9-clmkb" in "kube-system" namespace to be "Ready" ...
	E1007 13:56:11.365074  807823 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-clmkb" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-clmkb" not found
	I1007 13:56:11.365083  807823 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-xhzgj" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:13.372834  807823 pod_ready.go:93] pod "coredns-7c65d6cfc9-xhzgj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:13.372863  807823 pod_ready.go:82] duration metric: took 2.007771715s for pod "coredns-7c65d6cfc9-xhzgj" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:13.372877  807823 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-221184" in "kube-system" namespace to be "Ready" ...
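
The node_ready and pod_ready waits above repeatedly query the API server until the node and the system-critical pods report Ready. A sketch of the node-side check with client-go follows; the kubeconfig path and node name are copied from the log, and this is an illustration, not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18424-747025/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.Background(), "auto-221184", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The "Ready":"True" strings in the log correspond to this condition.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, cond.Status)
		}
	}
}
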
	I1007 13:56:12.702458  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.703152  808244 main.go:141] libmachine: (newest-cni-006310) Found IP for machine: 192.168.72.175
	I1007 13:56:12.703377  808244 main.go:141] libmachine: (newest-cni-006310) Reserving static IP address...
	I1007 13:56:12.703435  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has current primary IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.703859  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "newest-cni-006310", mac: "52:54:00:d7:7d:b5", ip: "192.168.72.175"} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:12.703889  808244 main.go:141] libmachine: (newest-cni-006310) DBG | skip adding static IP to network mk-newest-cni-006310 - found existing host DHCP lease matching {name: "newest-cni-006310", mac: "52:54:00:d7:7d:b5", ip: "192.168.72.175"}
	I1007 13:56:12.703902  808244 main.go:141] libmachine: (newest-cni-006310) Reserved static IP address: 192.168.72.175
	I1007 13:56:12.703919  808244 main.go:141] libmachine: (newest-cni-006310) Waiting for SSH to be available...
	I1007 13:56:12.703930  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Getting to WaitForSSH function...
	I1007 13:56:12.706378  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.706784  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:12.706822  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.706893  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Using SSH client type: external
	I1007 13:56:12.706922  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa (-rw-------)
	I1007 13:56:12.706962  808244 main.go:141] libmachine: (newest-cni-006310) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:56:12.706993  808244 main.go:141] libmachine: (newest-cni-006310) DBG | About to run SSH command:
	I1007 13:56:12.707012  808244 main.go:141] libmachine: (newest-cni-006310) DBG | exit 0
	I1007 13:56:12.838405  808244 main.go:141] libmachine: (newest-cni-006310) DBG | SSH cmd err, output: <nil>: 
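
The WaitForSSH step above shells out to the external ssh client with non-interactive options and runs "exit 0" until the guest answers. A small Go sketch of the same probe via os/exec; the IP, user, and key path are copied from the log, while the retry cadence is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs "exit 0" over ssh with the same non-interactive options the
// log shows, returning nil once the guest accepts the connection.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	for i := 0; i < 30; i++ {
		if err := probeSSH("192.168.72.175",
			"/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
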
	I1007 13:56:12.838823  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetConfigRaw
	I1007 13:56:12.839475  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetIP
	I1007 13:56:12.842246  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.842591  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:12.842624  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.842912  808244 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/config.json ...
	I1007 13:56:12.843250  808244 machine.go:93] provisionDockerMachine start ...
	I1007 13:56:12.843276  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:12.843572  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:12.846125  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.846471  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:12.846502  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.846650  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:12.846861  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:12.846994  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:12.847118  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:12.847335  808244 main.go:141] libmachine: Using SSH client type: native
	I1007 13:56:12.847546  808244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1007 13:56:12.847562  808244 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:56:12.962787  808244 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:56:12.962823  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetMachineName
	I1007 13:56:12.963116  808244 buildroot.go:166] provisioning hostname "newest-cni-006310"
	I1007 13:56:12.963142  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetMachineName
	I1007 13:56:12.963342  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:12.966276  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.966599  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:12.966648  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:12.966751  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:12.966967  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:12.967139  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:12.967271  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:12.967422  808244 main.go:141] libmachine: Using SSH client type: native
	I1007 13:56:12.967611  808244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1007 13:56:12.967624  808244 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-006310 && echo "newest-cni-006310" | sudo tee /etc/hostname
	I1007 13:56:13.099127  808244 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-006310
	
	I1007 13:56:13.099167  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.102204  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.102588  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.102622  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.102855  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:13.103064  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.103234  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.103396  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:13.103577  808244 main.go:141] libmachine: Using SSH client type: native
	I1007 13:56:13.103822  808244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1007 13:56:13.103840  808244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-006310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006310/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-006310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:56:13.233183  808244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
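Editor's note: the shell snippet above is minikube's idempotent /etc/hosts fixup — only when no line already ends with the new hostname does it replace (or append) the 127.0.1.1 entry. Below is a minimal Go sketch of the same edit; the function name, error handling, and behaviour details are illustrative assumptions, not minikube's actual implementation.

// ensure_hostname.go - hypothetical sketch of the /etc/hosts edit performed by the
// SSH command in the log above. Not minikube code; names and details are assumptions.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname maps 127.0.1.1 to hostname, mirroring the grep/sed/tee snippet:
// do nothing if some line already ends with the hostname, otherwise replace an
// existing 127.0.1.1 line or append a new one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == hostname {
			return nil // hostname already present, like the `grep -xq` guard
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "newest-cni-006310"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}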
	I1007 13:56:13.233223  808244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:56:13.233266  808244 buildroot.go:174] setting up certificates
	I1007 13:56:13.233285  808244 provision.go:84] configureAuth start
	I1007 13:56:13.233300  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetMachineName
	I1007 13:56:13.233610  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetIP
	I1007 13:56:13.236767  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.237199  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.237230  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.237382  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.239986  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.240395  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.240420  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.240548  808244 provision.go:143] copyHostCerts
	I1007 13:56:13.240611  808244 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:56:13.240634  808244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:56:13.240684  808244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:56:13.240791  808244 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:56:13.240804  808244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:56:13.240824  808244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:56:13.240897  808244 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:56:13.240905  808244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:56:13.240923  808244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:56:13.240981  808244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006310 san=[127.0.0.1 192.168.72.175 localhost minikube newest-cni-006310]
	I1007 13:56:13.408810  808244 provision.go:177] copyRemoteCerts
	I1007 13:56:13.408872  808244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:56:13.408905  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.411798  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.412159  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.412193  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.412365  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:13.412537  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.412679  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:13.412834  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:13.504754  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:56:13.531182  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 13:56:13.559663  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:56:13.588217  808244 provision.go:87] duration metric: took 354.917941ms to configureAuth
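Editor's note: configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the node name, signed by the existing minikube CA, then ships it to /etc/docker. The sketch below builds a certificate of the same shape with Go's standard library; for brevity it creates a throwaway self-signed CA instead of loading minikube's ca.pem/ca-key.pem, so it is illustrative only.

// servercert.go - hypothetical sketch of issuing a server certificate with the SAN
// list shown in the log. Uses a throwaway CA; minikube signs with its stored CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption: minikube instead loads ca.pem/ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-006310"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-006310"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.175")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem, which copyRemoteCerts would then scp to /etc/docker/server.pem.
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}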
	I1007 13:56:13.588253  808244 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:56:13.588504  808244 config.go:182] Loaded profile config "newest-cni-006310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:13.588608  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.591636  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.592092  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.592115  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.592390  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:13.592619  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.592807  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.593011  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:13.593193  808244 main.go:141] libmachine: Using SSH client type: native
	I1007 13:56:13.593373  808244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1007 13:56:13.593388  808244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:56:13.831497  808244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:56:13.831529  808244 machine.go:96] duration metric: took 988.259385ms to provisionDockerMachine
	I1007 13:56:13.831545  808244 start.go:293] postStartSetup for "newest-cni-006310" (driver="kvm2")
	I1007 13:56:13.831557  808244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:56:13.831578  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:13.831891  808244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:56:13.831921  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.834542  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.834947  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.834977  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.835152  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:13.835351  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.835528  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:13.835676  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:13.921627  808244 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:56:13.926462  808244 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:56:13.926494  808244 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:56:13.926570  808244 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:56:13.926666  808244 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:56:13.926781  808244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:56:13.936953  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:56:13.964804  808244 start.go:296] duration metric: took 133.241658ms for postStartSetup
	I1007 13:56:13.964851  808244 fix.go:56] duration metric: took 18.276649806s for fixHost
	I1007 13:56:13.964875  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:13.967599  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.967940  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:13.967964  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:13.968162  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:13.968414  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.968588  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:13.968737  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:13.968911  808244 main.go:141] libmachine: Using SSH client type: native
	I1007 13:56:13.969088  808244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1007 13:56:13.969098  808244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:56:14.083536  808244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728309374.050656805
	
	I1007 13:56:14.083565  808244 fix.go:216] guest clock: 1728309374.050656805
	I1007 13:56:14.083574  808244 fix.go:229] Guest: 2024-10-07 13:56:14.050656805 +0000 UTC Remote: 2024-10-07 13:56:13.964855661 +0000 UTC m=+18.436642085 (delta=85.801144ms)
	I1007 13:56:14.083746  808244 fix.go:200] guest clock delta is within tolerance: 85.801144ms
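Editor's note: the fix.go lines above read the guest's `date +%s.%N`, turn it into a timestamp, and accept the drift against the host clock if it is within a tolerance (here the delta was ~86 ms). A small Go sketch of that comparison is below, assuming the command is run locally and a hypothetical one-second tolerance.

// clockdelta.go - hypothetical sketch of the guest-clock check in the log above:
// read "date +%s.%N" output, convert it to time.Time, and compare with local time.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In minikube this command runs over SSH on the guest; here it runs locally.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumption; minikube applies its own tolerance
	fmt.Printf("guest clock: %v, delta: %v, within tolerance: %v\n", guest, delta, delta <= tolerance)
}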
	I1007 13:56:14.083756  808244 start.go:83] releasing machines lock for "newest-cni-006310", held for 18.395570899s
	I1007 13:56:14.083790  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:14.084165  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetIP
	I1007 13:56:14.087237  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.087621  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:14.087652  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.087903  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:14.088493  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:14.088681  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:14.088815  808244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:56:14.088884  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:14.088909  808244 ssh_runner.go:195] Run: cat /version.json
	I1007 13:56:14.088935  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:14.091560  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.091637  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.092048  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:14.092085  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:14.092105  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.092122  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:14.092246  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:14.092354  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:14.092456  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:14.092556  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:14.092636  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:14.092751  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:14.092785  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:14.092888  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:14.209387  808244 ssh_runner.go:195] Run: systemctl --version
	I1007 13:56:14.215744  808244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:56:14.364569  808244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:56:14.371027  808244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:56:14.371132  808244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:56:14.390314  808244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:56:14.390348  808244 start.go:495] detecting cgroup driver to use...
	I1007 13:56:14.390451  808244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:56:14.409974  808244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:56:14.433423  808244 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:56:14.433527  808244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:56:14.452024  808244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:56:14.469487  808244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:56:14.612521  808244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:56:14.798411  808244 docker.go:233] disabling docker service ...
	I1007 13:56:14.798490  808244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:56:14.814419  808244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:56:14.828700  808244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:56:14.962668  808244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:56:15.083311  808244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:56:15.100158  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:56:15.119352  808244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:56:15.119427  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.130690  808244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:56:15.130763  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.141887  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.154150  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.164907  808244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:56:15.178614  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.190820  808244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.208663  808244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:56:15.219939  808244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:56:15.229897  808244 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:56:15.229956  808244 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:56:15.243077  808244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:56:15.256005  808244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:56:15.385179  808244 ssh_runner.go:195] Run: sudo systemctl restart crio
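Editor's note: the steps from 13:56:15.119 to 13:56:15.385 above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before reloading systemd and restarting crio. A rough Go equivalent of the two simplest edits is sketched below; the file path comes from the log, everything else is an assumption and not minikube's code.

// crioconf.go - hypothetical sketch of the sed-style edits in the log above:
// force pause_image and cgroup_manager lines in a CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

// setKey replaces an existing `key = ...` line or appends one, mirroring the
// `sudo sed -i 's|^.*key = .*$|...|'` commands shown above.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	line := key + ` = "` + value + `"`
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		panic(err)
	}
	// minikube then runs `systemctl daemon-reload` and `systemctl restart crio`.
}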
	I1007 13:56:15.491136  808244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:56:15.491242  808244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:56:15.496611  808244 start.go:563] Will wait 60s for crictl version
	I1007 13:56:15.496674  808244 ssh_runner.go:195] Run: which crictl
	I1007 13:56:15.501273  808244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:56:15.546888  808244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:56:15.546994  808244 ssh_runner.go:195] Run: crio --version
	I1007 13:56:15.577399  808244 ssh_runner.go:195] Run: crio --version
	I1007 13:56:15.610786  808244 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:56:15.612167  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetIP
	I1007 13:56:15.615110  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:15.615476  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:15.615502  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:15.615742  808244 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 13:56:15.620255  808244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:56:15.635431  808244 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1007 13:56:14.880947  807823 pod_ready.go:93] pod "etcd-auto-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:14.880973  807823 pod_ready.go:82] duration metric: took 1.508088321s for pod "etcd-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.880984  807823 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.886769  807823 pod_ready.go:93] pod "kube-apiserver-auto-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:14.886800  807823 pod_ready.go:82] duration metric: took 5.808029ms for pod "kube-apiserver-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.886815  807823 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.891758  807823 pod_ready.go:93] pod "kube-controller-manager-auto-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:14.891783  807823 pod_ready.go:82] duration metric: took 4.959595ms for pod "kube-controller-manager-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.891796  807823 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7grff" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.897128  807823 pod_ready.go:93] pod "kube-proxy-7grff" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:14.897154  807823 pod_ready.go:82] duration metric: took 5.349734ms for pod "kube-proxy-7grff" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:14.897170  807823 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:15.905082  807823 pod_ready.go:93] pod "kube-scheduler-auto-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:56:15.905115  807823 pod_ready.go:82] duration metric: took 1.007935864s for pod "kube-scheduler-auto-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:56:15.905128  807823 pod_ready.go:39] duration metric: took 6.556303946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:56:15.905152  807823 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:56:15.905232  807823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:56:15.929135  807823 api_server.go:72] duration metric: took 7.365184112s to wait for apiserver process to appear ...
	I1007 13:56:15.929167  807823 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:56:15.929195  807823 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1007 13:56:15.934483  807823 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I1007 13:56:15.935825  807823 api_server.go:141] control plane version: v1.31.1
	I1007 13:56:15.935856  807823 api_server.go:131] duration metric: took 6.681058ms to wait for apiserver health ...
	I1007 13:56:15.935866  807823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:56:15.976078  807823 system_pods.go:59] 7 kube-system pods found
	I1007 13:56:15.976143  807823 system_pods.go:61] "coredns-7c65d6cfc9-xhzgj" [8b2344bb-fdf9-4bbd-839a-6d57ce1c1ed1] Running
	I1007 13:56:15.976154  807823 system_pods.go:61] "etcd-auto-221184" [47df222a-b51c-4b98-8872-64b95af76679] Running
	I1007 13:56:15.976160  807823 system_pods.go:61] "kube-apiserver-auto-221184" [8dfd2a46-4ac9-4f49-bfdb-19bead855b6e] Running
	I1007 13:56:15.976167  807823 system_pods.go:61] "kube-controller-manager-auto-221184" [8cd4bc55-cf0a-4acc-babd-b8440eeaeaaf] Running
	I1007 13:56:15.976172  807823 system_pods.go:61] "kube-proxy-7grff" [7755f134-5d77-4e2a-8d85-4689c6aed28c] Running
	I1007 13:56:15.976177  807823 system_pods.go:61] "kube-scheduler-auto-221184" [5205d5c7-6662-4484-b980-a4fdb75946b2] Running
	I1007 13:56:15.976182  807823 system_pods.go:61] "storage-provisioner" [8baff118-10f7-47f5-bcdb-7ec26bb76a37] Running
	I1007 13:56:15.976199  807823 system_pods.go:74] duration metric: took 40.324254ms to wait for pod list to return data ...
	I1007 13:56:15.976211  807823 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:56:16.170836  807823 default_sa.go:45] found service account: "default"
	I1007 13:56:16.170869  807823 default_sa.go:55] duration metric: took 194.650561ms for default service account to be created ...
	I1007 13:56:16.170883  807823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:56:16.373715  807823 system_pods.go:86] 7 kube-system pods found
	I1007 13:56:16.373748  807823 system_pods.go:89] "coredns-7c65d6cfc9-xhzgj" [8b2344bb-fdf9-4bbd-839a-6d57ce1c1ed1] Running
	I1007 13:56:16.373754  807823 system_pods.go:89] "etcd-auto-221184" [47df222a-b51c-4b98-8872-64b95af76679] Running
	I1007 13:56:16.373758  807823 system_pods.go:89] "kube-apiserver-auto-221184" [8dfd2a46-4ac9-4f49-bfdb-19bead855b6e] Running
	I1007 13:56:16.373762  807823 system_pods.go:89] "kube-controller-manager-auto-221184" [8cd4bc55-cf0a-4acc-babd-b8440eeaeaaf] Running
	I1007 13:56:16.373767  807823 system_pods.go:89] "kube-proxy-7grff" [7755f134-5d77-4e2a-8d85-4689c6aed28c] Running
	I1007 13:56:16.373772  807823 system_pods.go:89] "kube-scheduler-auto-221184" [5205d5c7-6662-4484-b980-a4fdb75946b2] Running
	I1007 13:56:16.373777  807823 system_pods.go:89] "storage-provisioner" [8baff118-10f7-47f5-bcdb-7ec26bb76a37] Running
	I1007 13:56:16.373785  807823 system_pods.go:126] duration metric: took 202.89528ms to wait for k8s-apps to be running ...
	I1007 13:56:16.373796  807823 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:56:16.373855  807823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:56:16.395597  807823 system_svc.go:56] duration metric: took 21.786702ms WaitForService to wait for kubelet
	I1007 13:56:16.395634  807823 kubeadm.go:582] duration metric: took 7.831691791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:56:16.395660  807823 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:56:16.572076  807823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:56:16.572110  807823 node_conditions.go:123] node cpu capacity is 2
	I1007 13:56:16.572124  807823 node_conditions.go:105] duration metric: took 176.458003ms to run NodePressure ...
	I1007 13:56:16.572141  807823 start.go:241] waiting for startup goroutines ...
	I1007 13:56:16.572152  807823 start.go:246] waiting for cluster config update ...
	I1007 13:56:16.572163  807823 start.go:255] writing updated cluster config ...
	I1007 13:56:16.572496  807823 ssh_runner.go:195] Run: rm -f paused
	I1007 13:56:16.653385  807823 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:56:16.656806  807823 out.go:177] * Done! kubectl is now configured to use "auto-221184" cluster and "default" namespace by default
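Editor's note: the pod_ready lines above (from the parallel auto-221184 run) poll each control-plane pod until its Ready condition reports True, then move on to the apiserver healthz and system-pods checks. A compressed sketch of such a wait loop with client-go is below; it assumes a kubeconfig at the default path, and the helper names are invented for illustration rather than taken from minikube.

// podready.go - hypothetical sketch of the "waiting for pod ... to be Ready" loop
// in the log above, written against client-go. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-auto-221184", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}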
	I1007 13:56:15.636671  808244 kubeadm.go:883] updating cluster {Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:56:15.636833  808244 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:56:15.636917  808244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:56:15.677956  808244 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:56:15.678070  808244 ssh_runner.go:195] Run: which lz4
	I1007 13:56:15.683267  808244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:56:15.689271  808244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:56:15.689313  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:56:17.263295  808244 crio.go:462] duration metric: took 1.580060727s to copy over tarball
	I1007 13:56:17.263388  808244 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:56:19.749964  808244 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.486537226s)
	I1007 13:56:19.749999  808244 crio.go:469] duration metric: took 2.486670188s to extract the tarball
	I1007 13:56:19.750008  808244 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:56:19.805596  808244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:56:19.856831  808244 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:56:19.856863  808244 cache_images.go:84] Images are preloaded, skipping loading
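Editor's note: the preload step above first stats /preloaded.tar.lz4 on the guest, copies the ~388 MB cached tarball over when it is missing, extracts it into /var with lz4, removes it, and re-runs `crictl images` to confirm the images are present. A condensed Go sketch of the check-then-extract part follows; the paths and tar flags are taken from the log, everything else is assumed.

// preload.go - hypothetical sketch of the preload handling in the log above:
// if the tarball is absent, (pretend to) copy it, then extract it into /var.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path from the log
	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// minikube scp's the cached preloaded-images tarball here; omitted in this sketch.
		fmt.Println("tarball missing; it would be copied from the host cache first")
		return
	}
	// Same extraction command as the log: preserve xattrs, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	os.Remove(tarball) // the log removes the tarball once extracted
}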
	I1007 13:56:19.856872  808244 kubeadm.go:934] updating node { 192.168.72.175 8443 v1.31.1 crio true true} ...
	I1007 13:56:19.857010  808244 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-006310 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:56:19.857102  808244 ssh_runner.go:195] Run: crio config
	I1007 13:56:19.905244  808244 cni.go:84] Creating CNI manager for ""
	I1007 13:56:19.905274  808244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:56:19.905287  808244 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1007 13:56:19.905313  808244 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.175 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006310 NodeName:newest-cni-006310 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:56:19.905498  808244 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-006310"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:56:19.905581  808244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:56:19.917689  808244 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:56:19.917794  808244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:56:19.932614  808244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1007 13:56:19.952039  808244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:56:19.973981  808244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
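Editor's note: the kubeadm config printed earlier (and written here to kubeadm.yaml.new) is rendered from the cluster settings logged at kubeadm.go:181 — advertise address 192.168.72.175, pod CIDR 10.42.0.0/16, cgroupfs driver, the crio socket, and the ServerSideApply feature gate. Below is a toy text/template rendering of a small fragment of it; the struct fields and template are invented for illustration and are not minikube's.

// kubeadmcfg.go - hypothetical sketch of rendering a fragment of the kubeadm
// config shown above from a parameter struct. Field names are invented.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress:  "192.168.72.175",
		BindPort:          8443,
		NodeName:          "newest-cni-006310",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}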
	I1007 13:56:20.008778  808244 ssh_runner.go:195] Run: grep 192.168.72.175	control-plane.minikube.internal$ /etc/hosts
	I1007 13:56:20.015007  808244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:56:20.033132  808244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:56:20.179373  808244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:56:20.199306  808244 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310 for IP: 192.168.72.175
	I1007 13:56:20.199336  808244 certs.go:194] generating shared ca certs ...
	I1007 13:56:20.199359  808244 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:20.199576  808244 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:56:20.199640  808244 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:56:20.199655  808244 certs.go:256] generating profile certs ...
	I1007 13:56:20.199783  808244 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/client.key
	I1007 13:56:20.199946  808244 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/apiserver.key.59472dcc
	I1007 13:56:20.200015  808244 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/proxy-client.key
	I1007 13:56:20.200175  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:56:20.200223  808244 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:56:20.200234  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:56:20.200267  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:56:20.200299  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:56:20.200333  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:56:20.200386  808244 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:56:20.201426  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:56:20.253590  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:56:20.284858  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:56:20.335699  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:56:20.372555  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 13:56:20.418489  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:56:20.455108  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:56:20.485690  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:56:20.521147  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:56:20.552094  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:56:20.580428  808244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:56:20.608616  808244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:56:20.628813  808244 ssh_runner.go:195] Run: openssl version
	I1007 13:56:20.637429  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:56:20.650980  808244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:56:20.656095  808244 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:56:20.656183  808244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:56:20.662715  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:56:20.676709  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:56:20.690757  808244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:56:20.696297  808244 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:56:20.696373  808244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:56:20.703392  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:56:20.716854  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:56:20.731167  808244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:56:20.736427  808244 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:56:20.736493  808244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:56:20.743282  808244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:56:20.758194  808244 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:56:20.764053  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:56:20.771854  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:56:20.778813  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:56:20.786103  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:56:20.793554  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:56:20.800676  808244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
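	The openssl `x509 -checkend 86400` invocations above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check, assuming nothing beyond a PEM-encoded certificate on disk (the path below is only a placeholder for illustration, not how minikube itself implements it):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder path; any PEM-encoded certificate works here.
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`: flag certificates
    	// that expire within the next 86400 seconds (24 hours).
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }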
	I1007 13:56:20.808081  808244 kubeadm.go:392] StartCluster: {Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:56:20.808180  808244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:56:20.808247  808244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:56:20.864446  808244 cri.go:89] found id: ""
	I1007 13:56:20.864519  808244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:56:20.876280  808244 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:56:20.876304  808244 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:56:20.876367  808244 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:56:20.888010  808244 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:56:20.889016  808244 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006310" does not appear in /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:56:20.889647  808244 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-747025/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006310" cluster setting kubeconfig missing "newest-cni-006310" context setting]
	I1007 13:56:20.890608  808244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:20.892532  808244 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:56:20.905411  808244 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.175
	I1007 13:56:20.905464  808244 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:56:20.905482  808244 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:56:20.905543  808244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:56:20.946342  808244 cri.go:89] found id: ""
	I1007 13:56:20.946453  808244 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:56:20.967097  808244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:56:20.979636  808244 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:56:20.979676  808244 kubeadm.go:157] found existing configuration files:
	
	I1007 13:56:20.979745  808244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:56:20.991493  808244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:56:20.991566  808244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:56:21.003923  808244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:56:21.015177  808244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:56:21.015247  808244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:56:21.026856  808244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:56:21.039071  808244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:56:21.039148  808244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:56:21.050647  808244 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:56:21.062351  808244 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:56:21.062441  808244 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:56:21.074418  808244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:56:21.087830  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:21.217589  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:22.426452  808244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.208818645s)
	I1007 13:56:22.426490  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:22.731638  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:22.826620  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:22.924113  808244 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:56:22.924239  808244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:56:23.424392  808244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:56:23.925146  808244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:56:23.989789  808244 api_server.go:72] duration metric: took 1.065694117s to wait for apiserver process to appear ...
	I1007 13:56:23.989820  808244 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:56:23.989846  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:23.990500  808244 api_server.go:269] stopped: https://192.168.72.175:8443/healthz: Get "https://192.168.72.175:8443/healthz": dial tcp 192.168.72.175:8443: connect: connection refused
	I1007 13:56:24.490108  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:26.795216  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:56:26.795253  808244 api_server.go:103] status: https://192.168.72.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:56:26.795273  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:26.855525  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:56:26.855565  808244 api_server.go:103] status: https://192.168.72.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:56:26.990942  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:27.045904  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:56:27.045953  808244 api_server.go:103] status: https://192.168.72.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:56:27.490588  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:27.495805  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:56:27.495845  808244 api_server.go:103] status: https://192.168.72.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:56:27.990154  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:27.996752  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:56:27.996784  808244 api_server.go:103] status: https://192.168.72.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:56:28.490128  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:28.494838  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 200:
	ok
	I1007 13:56:28.501894  808244 api_server.go:141] control plane version: v1.31.1
	I1007 13:56:28.501935  808244 api_server.go:131] duration metric: took 4.512106893s to wait for apiserver health ...
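	The block above polls https://192.168.72.175:8443/healthz until the apiserver answers 200, tolerating the intermediate 403 responses (the probe is anonymous, so it is rejected until RBAC bootstrap finishes) and 500 responses (post-start hooks such as bootstrap-controller still running). A rough Go sketch of that kind of poll loop, assuming an anonymous HTTPS client that skips certificate verification (illustrative only, not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// Anonymous probe: no client certificate and TLS verification skipped,
    	// so 403 responses are expected until RBAC bootstrap completes.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.175:8443/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }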
	I1007 13:56:28.501948  808244 cni.go:84] Creating CNI manager for ""
	I1007 13:56:28.501957  808244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:56:28.504192  808244 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:56:28.505995  808244 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:56:28.518004  808244 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
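	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration selected a few lines earlier; its exact contents are not shown in the log. For orientation, a generic bridge conflist in the format the CNI spec expects looks roughly like the constant below (name, subnet, and plugin chain are placeholders, not necessarily what minikube generates):

    package main

    import "fmt"

    // exampleConflist is a generic bridge CNI configuration in the conflist
    // format expected under /etc/cni/net.d. The values here are illustrative
    // placeholders, not the exact file minikube writes.
    const exampleConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() { fmt.Println(exampleConflist) }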
	I1007 13:56:28.538440  808244 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:56:28.550572  808244 system_pods.go:59] 8 kube-system pods found
	I1007 13:56:28.550633  808244 system_pods.go:61] "coredns-7c65d6cfc9-qzk98" [4b444fb1-f5a9-4cda-a807-40ca4c6f11c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:56:28.550650  808244 system_pods.go:61] "etcd-newest-cni-006310" [afa669c6-664b-4205-9562-6ba92e9727d2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:56:28.550663  808244 system_pods.go:61] "kube-apiserver-newest-cni-006310" [11fb8c51-0262-4606-9960-8609e81e3ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:56:28.550675  808244 system_pods.go:61] "kube-controller-manager-newest-cni-006310" [7cf05274-b35f-4582-92b7-1edde5313d7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:56:28.550687  808244 system_pods.go:61] "kube-proxy-t5q4s" [653741cd-a7b9-4fed-b095-107549e50580] Running
	I1007 13:56:28.550701  808244 system_pods.go:61] "kube-scheduler-newest-cni-006310" [543aeea4-7465-4aeb-aa12-2935f0535fd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:56:28.550715  808244 system_pods.go:61] "metrics-server-6867b74b74-56xqv" [8095c4ad-38d1-4a06-b74a-5df79b7bd32a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:56:28.550726  808244 system_pods.go:61] "storage-provisioner" [2f884334-93e8-4d7b-bb44-6968e96f9ba6] Running
	I1007 13:56:28.550738  808244 system_pods.go:74] duration metric: took 12.203328ms to wait for pod list to return data ...
	I1007 13:56:28.550752  808244 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:56:28.555002  808244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:56:28.555051  808244 node_conditions.go:123] node cpu capacity is 2
	I1007 13:56:28.555065  808244 node_conditions.go:105] duration metric: took 4.30423ms to run NodePressure ...
	I1007 13:56:28.555091  808244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:56:28.832960  808244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:56:28.851627  808244 ops.go:34] apiserver oom_adj: -16
	I1007 13:56:28.851657  808244 kubeadm.go:597] duration metric: took 7.975341754s to restartPrimaryControlPlane
	I1007 13:56:28.851669  808244 kubeadm.go:394] duration metric: took 8.043602696s to StartCluster
	I1007 13:56:28.851689  808244 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:28.851782  808244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:56:28.853578  808244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:56:28.853900  808244 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:56:28.853982  808244 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:56:28.854151  808244 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-006310"
	I1007 13:56:28.854198  808244 addons.go:69] Setting dashboard=true in profile "newest-cni-006310"
	I1007 13:56:28.854212  808244 addons.go:69] Setting metrics-server=true in profile "newest-cni-006310"
	I1007 13:56:28.854235  808244 config.go:182] Loaded profile config "newest-cni-006310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:56:28.854247  808244 addons.go:234] Setting addon metrics-server=true in "newest-cni-006310"
	W1007 13:56:28.854264  808244 addons.go:243] addon metrics-server should already be in state true
	I1007 13:56:28.854309  808244 host.go:66] Checking if "newest-cni-006310" exists ...
	I1007 13:56:28.854212  808244 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-006310"
	W1007 13:56:28.854360  808244 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:56:28.854406  808244 host.go:66] Checking if "newest-cni-006310" exists ...
	I1007 13:56:28.854230  808244 addons.go:234] Setting addon dashboard=true in "newest-cni-006310"
	W1007 13:56:28.854447  808244 addons.go:243] addon dashboard should already be in state true
	I1007 13:56:28.854488  808244 host.go:66] Checking if "newest-cni-006310" exists ...
	I1007 13:56:28.854174  808244 addons.go:69] Setting default-storageclass=true in profile "newest-cni-006310"
	I1007 13:56:28.854713  808244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006310"
	I1007 13:56:28.854753  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.854785  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.854811  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.854841  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.854956  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.855010  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.855332  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.855405  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.856153  808244 out.go:177] * Verifying Kubernetes components...
	I1007 13:56:28.858457  808244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:56:28.877684  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
	I1007 13:56:28.877893  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44565
	I1007 13:56:28.878400  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.878536  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.879917  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42271
	I1007 13:56:28.880139  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.880141  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.880158  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.880177  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.880619  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.880647  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I1007 13:56:28.880626  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.880706  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.881137  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:56:28.881293  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.881312  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.881379  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.881518  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.881568  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.881820  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.882412  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.882435  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.882708  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.882763  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.883058  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.884363  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.884446  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.887224  808244 addons.go:234] Setting addon default-storageclass=true in "newest-cni-006310"
	W1007 13:56:28.887252  808244 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:56:28.887291  808244 host.go:66] Checking if "newest-cni-006310" exists ...
	I1007 13:56:28.887687  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.887743  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.903875  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1007 13:56:28.905094  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I1007 13:56:28.906407  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I1007 13:56:28.914811  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.914837  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.914895  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.915364  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.915382  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.915489  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.915505  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.915524  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.915549  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.915763  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.915819  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.915943  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.916027  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:56:28.916077  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:56:28.916133  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:56:28.918437  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:28.918603  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:28.918721  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:28.921038  808244 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:56:28.921038  808244 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1007 13:56:28.921295  808244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:56:28.923366  808244 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1007 13:56:28.923689  808244 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:56:28.923946  808244 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:56:28.923989  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:28.925086  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1007 13:56:28.925001  808244 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:56:28.925126  808244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:56:28.925153  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:28.925109  808244 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1007 13:56:28.925223  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:28.929883  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.930783  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:28.930825  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.931253  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.931622  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:28.931643  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.931834  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.931898  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:28.932208  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:28.932271  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:28.932368  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:28.932615  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.932673  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:28.932743  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:28.932927  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:28.932976  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:28.933162  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:28.933394  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:28.933601  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:28.933800  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:28.933975  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:28.937259  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I1007 13:56:28.937745  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.938460  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.938483  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.938876  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.939383  808244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:56:28.939429  808244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:56:28.958349  808244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I1007 13:56:28.958884  808244 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:56:28.959524  808244 main.go:141] libmachine: Using API Version  1
	I1007 13:56:28.959547  808244 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:56:28.959975  808244 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:56:28.960193  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetState
	I1007 13:56:28.962499  808244 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:56:28.963037  808244 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:56:28.963064  808244 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:56:28.963192  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHHostname
	I1007 13:56:28.967074  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.967545  808244 main.go:141] libmachine: (newest-cni-006310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:7d:b5", ip: ""} in network mk-newest-cni-006310: {Iface:virbr2 ExpiryTime:2024-10-07 14:56:06 +0000 UTC Type:0 Mac:52:54:00:d7:7d:b5 Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:newest-cni-006310 Clientid:01:52:54:00:d7:7d:b5}
	I1007 13:56:28.967574  808244 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined IP address 192.168.72.175 and MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:56:28.967894  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHPort
	I1007 13:56:28.968117  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHKeyPath
	I1007 13:56:28.968414  808244 main.go:141] libmachine: (newest-cni-006310) Calling .GetSSHUsername
	I1007 13:56:28.968590  808244 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa Username:docker}
	I1007 13:56:29.121053  808244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:56:29.141358  808244 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:56:29.141546  808244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:56:29.166991  808244 api_server.go:72] duration metric: took 313.044511ms to wait for apiserver process to appear ...
	I1007 13:56:29.167036  808244 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:56:29.167085  808244 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	I1007 13:56:29.177915  808244 api_server.go:279] https://192.168.72.175:8443/healthz returned 200:
	ok
	I1007 13:56:29.181101  808244 api_server.go:141] control plane version: v1.31.1
	I1007 13:56:29.181137  808244 api_server.go:131] duration metric: took 14.090881ms to wait for apiserver health ...
	I1007 13:56:29.181150  808244 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:56:29.189448  808244 system_pods.go:59] 8 kube-system pods found
	I1007 13:56:29.189495  808244 system_pods.go:61] "coredns-7c65d6cfc9-qzk98" [4b444fb1-f5a9-4cda-a807-40ca4c6f11c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:56:29.189509  808244 system_pods.go:61] "etcd-newest-cni-006310" [afa669c6-664b-4205-9562-6ba92e9727d2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:56:29.189523  808244 system_pods.go:61] "kube-apiserver-newest-cni-006310" [11fb8c51-0262-4606-9960-8609e81e3ae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:56:29.189535  808244 system_pods.go:61] "kube-controller-manager-newest-cni-006310" [7cf05274-b35f-4582-92b7-1edde5313d7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:56:29.189541  808244 system_pods.go:61] "kube-proxy-t5q4s" [653741cd-a7b9-4fed-b095-107549e50580] Running
	I1007 13:56:29.189552  808244 system_pods.go:61] "kube-scheduler-newest-cni-006310" [543aeea4-7465-4aeb-aa12-2935f0535fd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:56:29.189571  808244 system_pods.go:61] "metrics-server-6867b74b74-56xqv" [8095c4ad-38d1-4a06-b74a-5df79b7bd32a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:56:29.189580  808244 system_pods.go:61] "storage-provisioner" [2f884334-93e8-4d7b-bb44-6968e96f9ba6] Running
	I1007 13:56:29.189592  808244 system_pods.go:74] duration metric: took 8.434544ms to wait for pod list to return data ...
	I1007 13:56:29.189604  808244 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:56:29.201971  808244 default_sa.go:45] found service account: "default"
	I1007 13:56:29.202010  808244 default_sa.go:55] duration metric: took 12.394746ms for default service account to be created ...
	I1007 13:56:29.202039  808244 kubeadm.go:582] duration metric: took 348.096116ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 13:56:29.202064  808244 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:56:29.220405  808244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:56:29.220440  808244 node_conditions.go:123] node cpu capacity is 2
	I1007 13:56:29.220454  808244 node_conditions.go:105] duration metric: took 18.383233ms to run NodePressure ...
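	The NodePressure verification above reads each node's capacity (ephemeral storage and CPU) back from the API server. A compact client-go sketch that lists nodes and prints the same two capacities, assuming only a reachable kubeconfig at a placeholder path (illustrative, not minikube's node_conditions.go):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path for illustration.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }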
	I1007 13:56:29.220472  808244 start.go:241] waiting for startup goroutines ...
	I1007 13:56:29.223491  808244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:56:29.223524  808244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:56:29.278278  808244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:56:29.281866  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1007 13:56:29.281895  808244 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1007 13:56:29.290164  808244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:56:29.290206  808244 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:56:29.299531  808244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:56:29.368951  808244 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:56:29.368993  808244 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:56:29.369447  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1007 13:56:29.369469  808244 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1007 13:56:29.425412  808244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:56:29.433788  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1007 13:56:29.433823  808244 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1007 13:56:29.498403  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1007 13:56:29.498427  808244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1007 13:56:29.556865  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1007 13:56:29.556903  808244 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1007 13:56:29.708734  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1007 13:56:29.708771  808244 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1007 13:56:29.769194  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1007 13:56:29.769250  808244 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1007 13:56:29.810865  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1007 13:56:29.810899  808244 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1007 13:56:29.848250  808244 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:56:29.848300  808244 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1007 13:56:29.880487  808244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:56:30.891146  808244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.591563227s)
	I1007 13:56:30.891224  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:30.891236  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:30.891370  808244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61304487s)
	I1007 13:56:30.891412  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:30.891427  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:30.891643  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:30.891681  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Closing plugin on server side
	I1007 13:56:30.891721  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:30.891763  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:30.891799  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:30.891810  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:30.891764  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:30.891878  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:30.891886  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:30.892183  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Closing plugin on server side
	I1007 13:56:30.892199  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Closing plugin on server side
	I1007 13:56:30.892229  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:30.892249  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:30.892255  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:30.892328  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:30.900452  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:30.900480  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:30.900864  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:30.900884  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:31.043815  808244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.618345623s)
	I1007 13:56:31.043886  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:31.043903  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:31.044237  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:31.044259  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:31.044269  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:31.044277  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:31.044633  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Closing plugin on server side
	I1007 13:56:31.044646  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:31.044661  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:31.044674  808244 addons.go:475] Verifying addon metrics-server=true in "newest-cni-006310"
	I1007 13:56:31.681261  808244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.80071499s)
	I1007 13:56:31.681334  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:31.681350  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:31.681723  808244 main.go:141] libmachine: (newest-cni-006310) DBG | Closing plugin on server side
	I1007 13:56:31.681759  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:31.681772  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:31.681782  808244 main.go:141] libmachine: Making call to close driver server
	I1007 13:56:31.681791  808244 main.go:141] libmachine: (newest-cni-006310) Calling .Close
	I1007 13:56:31.682064  808244 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:56:31.682079  808244 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:56:31.683671  808244 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-006310 addons enable metrics-server
	
	I1007 13:56:31.685174  808244 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1007 13:56:31.686500  808244 addons.go:510] duration metric: took 2.832539202s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1007 13:56:31.686543  808244 start.go:246] waiting for cluster config update ...
	I1007 13:56:31.686561  808244 start.go:255] writing updated cluster config ...
	I1007 13:56:31.686804  808244 ssh_runner.go:195] Run: rm -f paused
	I1007 13:56:31.746827  808244 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:56:31.748824  808244 out.go:177] * Done! kubectl is now configured to use "newest-cni-006310" cluster and "default" namespace by default
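	The start log above ends with minikube's hint to run `minikube -p newest-cni-006310 addons enable metrics-server` and a note that kubectl now points at the new cluster. As a minimal sketch, assuming the newest-cni-006310 profile from this run is still available, the addon state and active kubectl context could be checked with:
	
		minikube -p newest-cni-006310 addons list
		kubectl config current-context
		kubectl --context newest-cni-006310 -n kube-system get deploy metrics-server
	
	The deployment name metrics-server in kube-system is the usual target of the minikube metrics-server addon; if the addon installed it under a different name, the last command would need adjusting.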
	
	
	==> CRI-O <==
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.717653792Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:499bff6ea03b5de03de76c2b055562ee2cd81c81d49a31bc3389d222e682b1ec,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-xwpbg,Uid:0f8c5895-ed84-4e2f-be7a-ed5858f47ce6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308406103013757,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-xwpbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8c5895-ed84-4e2f-be7a-ed5858f47ce6,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:40:05.792357656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e0396d2d-9740-4e17-868b-041d948a6eff,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308405647846044,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T13:40:05.338967813Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hrbbb,Uid:c5a49453-f8c8-44d1-bbca-2b7472bf504b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308404270178577,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:40:03.960858619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6vfj,Uid:fe2f90d1-9c6f-4ada
-996d-fc63bb7baffe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308404234099560,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2f90d1-9c6f-4ada-996d-fc63bb7baffe,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:40:03.921745163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&PodSandboxMetadata{Name:kube-proxy-z9r92,Uid:762b87c9-62ad-4bca-8135-77649d0a453a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308403982963246,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:40:03.671439412Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-653322,Uid:744c73aba4cc09cd313a7a99be824a05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308393175289175,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 744c73aba4cc09cd313a7a99be824a05,kubernetes.io/config.seen: 2024-10-07T13:39:52.711770986Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&PodSandboxM
etadata{Name:kube-apiserver-embed-certs-653322,Uid:7f8a5f41dd6f64ecdbe7aad4c8311dba,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728308393171106496,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.36:8443,kubernetes.io/config.hash: 7f8a5f41dd6f64ecdbe7aad4c8311dba,kubernetes.io/config.seen: 2024-10-07T13:39:52.711769455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-653322,Uid:902e4bb387c68294772acfd12f69c4d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308393163393638,Labels:map[string]string{component: etcd,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.36:2379,kubernetes.io/config.hash: 902e4bb387c68294772acfd12f69c4d9,kubernetes.io/config.seen: 2024-10-07T13:39:52.711765742Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-653322,Uid:cfc526eb0802c4ac41e87e2c050c7b36,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308393150624882,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,tier: control-plane,},Annotations:map[strin
g]string{kubernetes.io/config.hash: cfc526eb0802c4ac41e87e2c050c7b36,kubernetes.io/config.seen: 2024-10-07T13:39:52.711771966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d9c8b8dc-e5cc-4fe0-aefb-ed1ad8ba04d5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.718361421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f193a6f3-f078-432e-9044-d3723713dd99 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.718815853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f193a6f3-f078-432e-9044-d3723713dd99 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.719463967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f193a6f3-f078-432e-9044-d3723713dd99 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.743329698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d898b43-ba7f-4669-a937-448ae6f07cdf name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.743728617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d898b43-ba7f-4669-a937-448ae6f07cdf name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.745059599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e0489b0-c22a-4442-a047-1220c5f8e71e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.745500869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309394745473881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e0489b0-c22a-4442-a047-1220c5f8e71e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.747892093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e5d9a96-b0b9-45e1-b600-ada209081d53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.747964531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e5d9a96-b0b9-45e1-b600-ada209081d53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.748210703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e5d9a96-b0b9-45e1-b600-ada209081d53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.796102194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efd62ba4-acd8-4f88-ac0f-55e6f8a3789e name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.796502783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efd62ba4-acd8-4f88-ac0f-55e6f8a3789e name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.797764822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af65d04e-a535-436f-8bc9-0c10d0536ffe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.798232905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309394798206596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af65d04e-a535-436f-8bc9-0c10d0536ffe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.798815392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3dbf583-85a3-4958-aa0b-1ab0140ab3a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.798869514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3dbf583-85a3-4958-aa0b-1ab0140ab3a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.799137400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3dbf583-85a3-4958-aa0b-1ab0140ab3a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.836128036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2bff8b4-ed6b-4610-9f32-3707fd95b569 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.836226891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2bff8b4-ed6b-4610-9f32-3707fd95b569 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.839273609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5187588a-1acf-4b22-a214-136b4f53f952 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.839969802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309394839932089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5187588a-1acf-4b22-a214-136b4f53f952 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.840832348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ae45c83-d09a-41b0-9049-21b7404cf67c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.840902345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ae45c83-d09a-41b0-9049-21b7404cf67c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:56:34 embed-certs-653322 crio[717]: time="2024-10-07 13:56:34.841136171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91,PodSandboxId:77414d1be78673df7f65e4ffb441c563044c9f0c60a25f99131d677b39f726c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308405913881736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0396d2d-9740-4e17-868b-041d948a6eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720,PodSandboxId:ed8079b4091f31108cb282af53af4e4b7f2a366d9aff9ea682a4189c94b11146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405415320504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hrbbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5a49453-f8c8-44d1-bbca-2b7472bf504b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4,PodSandboxId:805bb6668884db94aadef31a2358b7c46d50b18d9b3fd168b588d8ef28256979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308405223932134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l6vfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
e2f90d1-9c6f-4ada-996d-fc63bb7baffe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a,PodSandboxId:8654b316558d4fff44e7851da911bdc714d142ebe6234529df9d763a411f130c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728308404305848757,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9r92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 762b87c9-62ad-4bca-8135-77649d0a453a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba,PodSandboxId:c73b61ac974dcd2ea22f6f7d6a393754bba38bbd6b8e3ad6eaea130a83a26bba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308393423282744
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 744c73aba4cc09cd313a7a99be824a05,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20,PodSandboxId:f0f0577dcaa4aedfb19f03ec95130a4773e1b208d09a02b8e02587f79cb8f0cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308393394496980,Lab
els:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 902e4bb387c68294772acfd12f69c4d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7,PodSandboxId:b72035e7d994d2c9f2efd0d93baa18ee2fd99e3a06a575482da90e1e6218daf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308393349055094,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc526eb0802c4ac41e87e2c050c7b36,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4,PodSandboxId:530eb8a8a4a35a8ac2d8336760de97b48a72b14838a10ad55226c0cf6fec21f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308393375176773,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95,PodSandboxId:b6f94a2563f838930e36849a6d8ee11d0a1291fe890f38d87f18fee03588dd80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308103883448090,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-653322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8a5f41dd6f64ecdbe7aad4c8311dba,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ae45c83-d09a-41b0-9049-21b7404cf67c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be7d2d18111c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   77414d1be7867       storage-provisioner
	185ad082fff4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   ed8079b4091f3       coredns-7c65d6cfc9-hrbbb
	ca57da92b1670       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   805bb6668884d       coredns-7c65d6cfc9-l6vfj
	c31cc1272200e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   8654b316558d4       kube-proxy-z9r92
	11d29174badb8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   c73b61ac974dc       kube-controller-manager-embed-certs-653322
	cd446f798df71       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   f0f0577dcaa4a       etcd-embed-certs-653322
	380f59263feb5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   530eb8a8a4a35       kube-apiserver-embed-certs-653322
	872a29822cdb8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   b72035e7d994d       kube-scheduler-embed-certs-653322
	d0e1e406683eb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   b6f94a2563f83       kube-apiserver-embed-certs-653322
	
	
	==> coredns [185ad082fff4ddd79be0f5372bb00fa09eda8a2de43d8b080585019e74a9d720] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ca57da92b167043d58e9cb7a7fa59d8099d3c66edc00e7afb21703491f795db4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-653322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-653322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=embed-certs-653322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:39:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-653322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:56:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:55:26 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:55:26 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:55:26 +0000   Mon, 07 Oct 2024 13:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:55:26 +0000   Mon, 07 Oct 2024 13:39:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    embed-certs-653322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e959c63f733947bf8e1b2bfbe717544c
	  System UUID:                e959c63f-7339-47bf-8e1b-2bfbe717544c
	  Boot ID:                    afa69290-cc98-4651-a690-b6a53a47693c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-hrbbb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-l6vfj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-653322                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-653322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-653322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-z9r92                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-653322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-xwpbg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-653322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-653322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-653322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-653322 event: Registered Node embed-certs-653322 in Controller
	
	
	==> dmesg <==
	[  +4.859726] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.648010] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.402170] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.969218] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.062446] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066469] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.196067] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.162161] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.302732] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[Oct 7 13:35] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +0.066872] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.941183] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +5.554262] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.053962] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.476708] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 13:39] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.322429] systemd-fstab-generator[2601]: Ignoring "noauto" option for root device
	[  +0.063737] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.005585] systemd-fstab-generator[2921]: Ignoring "noauto" option for root device
	[  +0.098926] kauditd_printk_skb: 54 callbacks suppressed
	[Oct 7 13:40] systemd-fstab-generator[3051]: Ignoring "noauto" option for root device
	[  +0.123031] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.086878] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [cd446f798df710fa9dc4dab9867848d15ad2319d5573f3d36d0bcbeb45f3bf20] <==
	{"level":"info","ts":"2024-10-07T13:55:25.869504Z","caller":"traceutil/trace.go:171","msg":"trace[1762796048] linearizableReadLoop","detail":"{readStateIndex:1404; appliedIndex:1404; }","duration":"160.012662ms","start":"2024-10-07T13:55:25.709478Z","end":"2024-10-07T13:55:25.869490Z","steps":["trace[1762796048] 'read index received'  (duration: 160.008534ms)","trace[1762796048] 'applied index is now lower than readState.Index'  (duration: 3.134µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:55:25.869748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.186233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:25.869803Z","caller":"traceutil/trace.go:171","msg":"trace[1991599478] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1202; }","duration":"160.320622ms","start":"2024-10-07T13:55:25.709473Z","end":"2024-10-07T13:55:25.869794Z","steps":["trace[1991599478] 'agreement among raft nodes before linearized reading'  (duration: 160.156762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:26.129415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.870724ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11334876809133505189 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" mod_revision:1193 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-07T13:55:26.129510Z","caller":"traceutil/trace.go:171","msg":"trace[288929240] linearizableReadLoop","detail":"{readStateIndex:1405; appliedIndex:1404; }","duration":"257.534187ms","start":"2024-10-07T13:55:25.871964Z","end":"2024-10-07T13:55:26.129498Z","steps":["trace[288929240] 'read index received'  (duration: 129.136759ms)","trace[288929240] 'applied index is now lower than readState.Index'  (duration: 128.39639ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:55:26.129759Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.785873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-07T13:55:26.129853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.061876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:26.130439Z","caller":"traceutil/trace.go:171","msg":"trace[1304468489] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1203; }","duration":"255.647716ms","start":"2024-10-07T13:55:25.874779Z","end":"2024-10-07T13:55:26.130427Z","steps":["trace[1304468489] 'agreement among raft nodes before linearized reading'  (duration: 255.040309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:26.129891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.169587ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:26.130749Z","caller":"traceutil/trace.go:171","msg":"trace[849762031] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1203; }","duration":"151.024815ms","start":"2024-10-07T13:55:25.979714Z","end":"2024-10-07T13:55:26.130738Z","steps":["trace[849762031] 'agreement among raft nodes before linearized reading'  (duration: 150.166279ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:55:26.130089Z","caller":"traceutil/trace.go:171","msg":"trace[511020313] transaction","detail":"{read_only:false; response_revision:1203; number_of_response:1; }","duration":"376.336097ms","start":"2024-10-07T13:55:25.753741Z","end":"2024-10-07T13:55:26.130077Z","steps":["trace[511020313] 'process raft request'  (duration: 247.413409ms)","trace[511020313] 'compare'  (duration: 127.546822ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:55:26.131636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:55:25.753717Z","time spent":"377.769564ms","remote":"127.0.0.1:44686","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" mod_revision:1193 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-gvvwmmhsl7nxjbz644f5om5x2e\" > >"}
	{"level":"info","ts":"2024-10-07T13:55:26.131336Z","caller":"traceutil/trace.go:171","msg":"trace[251696444] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1203; }","duration":"258.341724ms","start":"2024-10-07T13:55:25.871959Z","end":"2024-10-07T13:55:26.130301Z","steps":["trace[251696444] 'agreement among raft nodes before linearized reading'  (duration: 257.762366ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:55:26.758653Z","caller":"traceutil/trace.go:171","msg":"trace[485421437] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"509.287577ms","start":"2024-10-07T13:55:26.249347Z","end":"2024-10-07T13:55:26.758634Z","steps":["trace[485421437] 'process raft request'  (duration: 509.022446ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:26.758953Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:55:26.249326Z","time spent":"509.540986ms","remote":"127.0.0.1:44610","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5695,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/embed-certs-653322\" mod_revision:957 > success:<request_put:<key:\"/registry/minions/embed-certs-653322\" value_size:5651 >> failure:<request_range:<key:\"/registry/minions/embed-certs-653322\" > >"}
	{"level":"info","ts":"2024-10-07T13:55:26.855483Z","caller":"traceutil/trace.go:171","msg":"trace[503461401] linearizableReadLoop","detail":"{readStateIndex:1407; appliedIndex:1406; }","duration":"145.114991ms","start":"2024-10-07T13:55:26.710345Z","end":"2024-10-07T13:55:26.855460Z","steps":["trace[503461401] 'read index received'  (duration: 48.972218ms)","trace[503461401] 'applied index is now lower than readState.Index'  (duration: 96.142026ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:55:26.855717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.392992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:26.855811Z","caller":"traceutil/trace.go:171","msg":"trace[1633837631] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1205; }","duration":"145.499825ms","start":"2024-10-07T13:55:26.710302Z","end":"2024-10-07T13:55:26.855802Z","steps":["trace[1633837631] 'agreement among raft nodes before linearized reading'  (duration: 145.374875ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:55:26.855735Z","caller":"traceutil/trace.go:171","msg":"trace[874750359] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"301.194493ms","start":"2024-10-07T13:55:26.554520Z","end":"2024-10-07T13:55:26.855714Z","steps":["trace[874750359] 'process raft request'  (duration: 300.814245ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:26.856213Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:55:26.554500Z","time spent":"301.603653ms","remote":"127.0.0.1:44686","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-653322\" mod_revision:1195 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-653322\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-653322\" > >"}
	{"level":"warn","ts":"2024-10-07T13:55:27.114016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.116669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:27.114693Z","caller":"traceutil/trace.go:171","msg":"trace[1996085054] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"239.79635ms","start":"2024-10-07T13:55:26.874879Z","end":"2024-10-07T13:55:27.114675Z","steps":["trace[1996085054] 'range keys from in-memory index tree'  (duration: 239.040491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:27.114016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.042879ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:55:27.114936Z","caller":"traceutil/trace.go:171","msg":"trace[1786444030] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1205; }","duration":"133.968862ms","start":"2024-10-07T13:55:26.980952Z","end":"2024-10-07T13:55:27.114921Z","steps":["trace[1786444030] 'range keys from in-memory index tree'  (duration: 132.956882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:55:53.255635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.140841ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11334876809133505351 > lease_revoke:<id:1d4d926735373aef>","response":"size:28"}
	
	
	==> kernel <==
	 13:56:35 up 21 min,  0 users,  load average: 0.56, 0.23, 0.19
	Linux embed-certs-653322 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [380f59263feb55e4a63127749a75ef2b8ac617ca0b34839aa8353228f14ffda4] <==
	I1007 13:52:56.992924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:52:56.994604       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:54:55.992770       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:55.992939       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:54:56.995673       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:56.995741       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:54:56.995875       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:56.995987       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:54:56.996942       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:54:56.997029       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:55:56.997923       1 handler_proxy.go:99] no RequestInfo found in the context
	W1007 13:55:56.997948       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:55:56.998335       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1007 13:55:56.998355       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:55:56.999507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:55:56.999628       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d0e1e406683eb290dc83651321655c2a21c780ae2dfa5e0c4fef252f4f5b4e95] <==
	W1007 13:39:49.101692       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.311981       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.442204       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.470247       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.474885       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.612059       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:49.967993       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.021663       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.023062       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.089089       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.090310       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.119756       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.136940       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.146834       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.148186       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.212750       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.214271       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.219940       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.281200       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.300075       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.389841       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.508469       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.547486       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.560336       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:39:50.560841       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [11d29174badb881e1e81ae86300678431fff54d5d0d6e7bcdec2762ee4b2c6ba] <==
	E1007 13:51:33.125595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:51:33.588010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:52:03.134183       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:52:03.596372       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:52:33.141082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:52:33.605480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:03.147009       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:03.614459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:33.154130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:33.622705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:54:03.161763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:03.631993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:54:33.168861       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:33.640073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:55:03.175269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:55:03.648659       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:55:26.762408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-653322"
	E1007 13:55:33.182137       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:55:33.659874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:56:01.739843       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="320.643µs"
	E1007 13:56:03.190438       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:03.672722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:56:15.736941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="111.255µs"
	E1007 13:56:33.198275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:33.685810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c31cc1272200ed01433d6e8361298c5ba3036a81d1e4da985e5bf7ea812ccb9a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:40:04.849759       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:40:04.868063       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.36"]
	E1007 13:40:04.868136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:40:05.018052       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:40:05.018126       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:40:05.018174       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:40:05.036268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:40:05.036618       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:40:05.036637       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:40:05.048130       1 config.go:199] "Starting service config controller"
	I1007 13:40:05.048254       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:40:05.048363       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:40:05.048382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:40:05.055483       1 config.go:328] "Starting node config controller"
	I1007 13:40:05.055499       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:40:05.149110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:40:05.149227       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:40:05.155631       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [872a29822cdb8e7bc41701485cfb72f642e9e0ab436250c8f93c792a871db7c7] <==
	W1007 13:39:56.950130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:39:56.950253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.029070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:39:57.029299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.053166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1007 13:39:57.053855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:39:57.053968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1007 13:39:57.054164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.082123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:39:57.082356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.106425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 13:39:57.106766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.112511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 13:39:57.112858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.145084       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:39:57.145119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.153145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:39:57.153277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.304594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:39:57.304647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.327503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:39:57.327788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:39:57.623069       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:39:57.623344       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 13:40:00.812867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:55:31 embed-certs-653322 kubelet[2928]: E1007 13:55:31.717828    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:55:39 embed-certs-653322 kubelet[2928]: E1007 13:55:39.054269    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309339053833989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:39 embed-certs-653322 kubelet[2928]: E1007 13:55:39.054877    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309339053833989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:46 embed-certs-653322 kubelet[2928]: E1007 13:55:46.734938    2928 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 07 13:55:46 embed-certs-653322 kubelet[2928]: E1007 13:55:46.735264    2928 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 07 13:55:46 embed-certs-653322 kubelet[2928]: E1007 13:55:46.735916    2928 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcv57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-xwpbg_kube-system(0f8c5895-ed84-4e2f-be7a-ed5858f47ce6): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 07 13:55:46 embed-certs-653322 kubelet[2928]: E1007 13:55:46.737280    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:55:49 embed-certs-653322 kubelet[2928]: E1007 13:55:49.057056    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309349056476513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:49 embed-certs-653322 kubelet[2928]: E1007 13:55:49.057137    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309349056476513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:58 embed-certs-653322 kubelet[2928]: E1007 13:55:58.759097    2928 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 13:55:58 embed-certs-653322 kubelet[2928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 13:55:58 embed-certs-653322 kubelet[2928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 13:55:58 embed-certs-653322 kubelet[2928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 13:55:58 embed-certs-653322 kubelet[2928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:55:59 embed-certs-653322 kubelet[2928]: E1007 13:55:59.059976    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309359059613329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:59 embed-certs-653322 kubelet[2928]: E1007 13:55:59.060047    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309359059613329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:01 embed-certs-653322 kubelet[2928]: E1007 13:56:01.717762    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:56:09 embed-certs-653322 kubelet[2928]: E1007 13:56:09.061621    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309369061172864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:09 embed-certs-653322 kubelet[2928]: E1007 13:56:09.061705    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309369061172864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:15 embed-certs-653322 kubelet[2928]: E1007 13:56:15.717654    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:56:19 embed-certs-653322 kubelet[2928]: E1007 13:56:19.064194    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309379063776820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:19 embed-certs-653322 kubelet[2928]: E1007 13:56:19.064792    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309379063776820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:27 embed-certs-653322 kubelet[2928]: E1007 13:56:27.717476    2928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xwpbg" podUID="0f8c5895-ed84-4e2f-be7a-ed5858f47ce6"
	Oct 07 13:56:29 embed-certs-653322 kubelet[2928]: E1007 13:56:29.066865    2928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309389066340196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:56:29 embed-certs-653322 kubelet[2928]: E1007 13:56:29.066906    2928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309389066340196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [be7d2d18111c904fb553b6e9674f1f6bbe4563008052ee902f12485721872c91] <==
	I1007 13:40:06.061988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:40:06.071929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:40:06.071992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:40:06.084143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:40:06.084351       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460!
	I1007 13:40:06.084323       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7e02454-542f-4e93-af4e-1feee42a6375", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460 became leader
	I1007 13:40:06.185594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-653322_9c10e6a5-50e4-4984-8a78-8f6539487460!
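
An aside on the provisioner output above: the "attempting to acquire leader lease ... successfully acquired lease" lines are client-go leader election against a lock object in kube-system (here recorded on the Endpoints object k8s.io-minikube-hostpath, as the event line shows). The sketch below is a generic, Lease-based rendition of that handshake using client-go's leaderelection helpers; the identity string and timings are invented, and it is not the provisioner's own (Endpoints-based) setup.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config, as a controller pod like the provisioner would use.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Lease lock in kube-system; name mirrors the lease seen in the log above,
	// the identity is a made-up example value.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// Equivalent of "became leader" / "Starting provisioner controller" above.
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader, starting controller") },
			OnStoppedLeading: func() { log.Println("lost lease, stopping") },
		},
	})
}
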
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-653322 -n embed-certs-653322
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-653322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xwpbg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg: exit status 1 (78.126481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xwpbg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-653322 describe pod metrics-server-6867b74b74-xwpbg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (441.43s)
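
Note on the post-mortem above: the kubelet log shows metrics-server stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. That reference follows directly from the addon flags recorded in the Audit table further down (--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain): the overridden registry is prefixed to the substituted image, yielding a host that cannot resolve, so the pull back-off is expected in this suite. The AddonExistsAfterStop failures in this run report the dashboard pods never appearing (see the no-preload variant below), not the back-off itself. A tiny illustration of the composition, with values copied from those flags (this is not minikube's own code):

package main

import "fmt"

func main() {
	// Values taken from the addon flags in the Audit table; the simple
	// prefixing below is an assumption shown only to explain the image
	// reference seen in the kubelet back-off messages.
	registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4

	fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
}
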

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (290.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016701 -n no-preload-016701
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-07 13:55:15.819349999 +0000 UTC m=+6449.017889989
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
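
The 9m0s wait above is a label-selector poll against the cluster that gives up once its context deadline expires, which is the "context deadline exceeded" reported here. A hedged sketch of such a wait with client-go (clientset construction omitted; the namespace and selector come from the messages above, the interval and everything else are assumptions, and this is not the harness's own helper):

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod polls for a pod matching selector in ns and returns a
// deadline-style error once timeout passes without any matching pod Running.
func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as retryable
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep waiting
		})
}

// Example call matching the failure above:
//   waitForRunningPod(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
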
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-016701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-016701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.769µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-016701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
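
The assertion at start_stop_delete_test.go:297 expects the scraped deployment description to mention the substituted image registry.k8s.io/echoserver:1.4; it reports empty "Addon deployment info" here only because the kubectl describe call above already ran with an expired deadline (2.769µs). As a rough illustration (again, not the harness's own helper), the same expectation expressed over a Deployment object would look like:

package example

import (
	"strings"

	appsv1 "k8s.io/api/apps/v1"
)

// deploymentUsesImage reports whether any container in the Deployment's pod
// template references an image containing want,
// e.g. want = "registry.k8s.io/echoserver:1.4".
func deploymentUsesImage(d *appsv1.Deployment, want string) bool {
	for _, c := range d.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, want) {
			return true
		}
	}
	return false
}
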
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-016701 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-016701 logs -n 25: (1.339639424s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:54 UTC | 07 Oct 24 13:54 UTC |
	| start   | -p newest-cni-006310 --memory=2200 --alsologtostderr   | newest-cni-006310            | jenkins | v1.34.0 | 07 Oct 24 13:54 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:54:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:54:55.644146  807372 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:54:55.644270  807372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:54:55.644279  807372 out.go:358] Setting ErrFile to fd 2...
	I1007 13:54:55.644284  807372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:54:55.644531  807372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:54:55.645182  807372 out.go:352] Setting JSON to false
	I1007 13:54:55.646312  807372 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13045,"bootTime":1728296251,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:54:55.646440  807372 start.go:139] virtualization: kvm guest
	I1007 13:54:55.648875  807372 out.go:177] * [newest-cni-006310] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:54:55.650589  807372 notify.go:220] Checking for updates...
	I1007 13:54:55.650646  807372 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:54:55.652012  807372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:54:55.653285  807372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:54:55.654512  807372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:54:55.655716  807372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:54:55.656811  807372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:54:55.658485  807372 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:54:55.658587  807372 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:54:55.658684  807372 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:54:55.658819  807372 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:54:55.698254  807372 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:54:55.699649  807372 start.go:297] selected driver: kvm2
	I1007 13:54:55.699667  807372 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:54:55.699687  807372 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:54:55.700561  807372 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:54:55.700672  807372 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:54:55.717304  807372 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:54:55.717375  807372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1007 13:54:55.717475  807372 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1007 13:54:55.717755  807372 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 13:54:55.717788  807372 cni.go:84] Creating CNI manager for ""
	I1007 13:54:55.717843  807372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:54:55.717854  807372 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 13:54:55.717903  807372 start.go:340] cluster config:
	{Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:54:55.718013  807372 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:54:55.719997  807372 out.go:177] * Starting "newest-cni-006310" primary control-plane node in "newest-cni-006310" cluster
	I1007 13:54:55.721842  807372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:54:55.721915  807372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:54:55.721930  807372 cache.go:56] Caching tarball of preloaded images
	I1007 13:54:55.722075  807372 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:54:55.722092  807372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:54:55.722229  807372 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/config.json ...
	I1007 13:54:55.722255  807372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/newest-cni-006310/config.json: {Name:mkbdc0b7b98338947e2793c225320d1f0e0acb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:54:55.722441  807372 start.go:360] acquireMachinesLock for newest-cni-006310: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:54:55.722482  807372 start.go:364] duration metric: took 23.346µs to acquireMachinesLock for "newest-cni-006310"
	I1007 13:54:55.722504  807372 start.go:93] Provisioning new machine with config: &{Name:newest-cni-006310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-006310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:54:55.722596  807372 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:54:55.724482  807372 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 13:54:55.724672  807372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:54:55.724726  807372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:54:55.741541  807372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1007 13:54:55.742177  807372 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:54:55.742810  807372 main.go:141] libmachine: Using API Version  1
	I1007 13:54:55.742833  807372 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:54:55.743208  807372 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:54:55.743421  807372 main.go:141] libmachine: (newest-cni-006310) Calling .GetMachineName
	I1007 13:54:55.743669  807372 main.go:141] libmachine: (newest-cni-006310) Calling .DriverName
	I1007 13:54:55.743895  807372 start.go:159] libmachine.API.Create for "newest-cni-006310" (driver="kvm2")
	I1007 13:54:55.743930  807372 client.go:168] LocalClient.Create starting
	I1007 13:54:55.743985  807372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:54:55.744030  807372 main.go:141] libmachine: Decoding PEM data...
	I1007 13:54:55.744055  807372 main.go:141] libmachine: Parsing certificate...
	I1007 13:54:55.744112  807372 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:54:55.744133  807372 main.go:141] libmachine: Decoding PEM data...
	I1007 13:54:55.744141  807372 main.go:141] libmachine: Parsing certificate...
	I1007 13:54:55.744154  807372 main.go:141] libmachine: Running pre-create checks...
	I1007 13:54:55.744160  807372 main.go:141] libmachine: (newest-cni-006310) Calling .PreCreateCheck
	I1007 13:54:55.744573  807372 main.go:141] libmachine: (newest-cni-006310) Calling .GetConfigRaw
	I1007 13:54:55.745055  807372 main.go:141] libmachine: Creating machine...
	I1007 13:54:55.745071  807372 main.go:141] libmachine: (newest-cni-006310) Calling .Create
	I1007 13:54:55.745241  807372 main.go:141] libmachine: (newest-cni-006310) Creating KVM machine...
	I1007 13:54:55.746593  807372 main.go:141] libmachine: (newest-cni-006310) DBG | found existing default KVM network
	I1007 13:54:55.747842  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:55.747661  807395 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:f6:1d} reservation:<nil>}
	I1007 13:54:55.748711  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:55.748608  807395 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:f8:33} reservation:<nil>}
	I1007 13:54:55.749709  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:55.749600  807395 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:40:af} reservation:<nil>}
	I1007 13:54:55.750930  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:55.750833  807395 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00033b630}
	I1007 13:54:55.751000  807372 main.go:141] libmachine: (newest-cni-006310) DBG | created network xml: 
	I1007 13:54:55.751027  807372 main.go:141] libmachine: (newest-cni-006310) DBG | <network>
	I1007 13:54:55.751039  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   <name>mk-newest-cni-006310</name>
	I1007 13:54:55.751048  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   <dns enable='no'/>
	I1007 13:54:55.751091  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   
	I1007 13:54:55.751122  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1007 13:54:55.751151  807372 main.go:141] libmachine: (newest-cni-006310) DBG |     <dhcp>
	I1007 13:54:55.751168  807372 main.go:141] libmachine: (newest-cni-006310) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1007 13:54:55.751177  807372 main.go:141] libmachine: (newest-cni-006310) DBG |     </dhcp>
	I1007 13:54:55.751184  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   </ip>
	I1007 13:54:55.751191  807372 main.go:141] libmachine: (newest-cni-006310) DBG |   
	I1007 13:54:55.751198  807372 main.go:141] libmachine: (newest-cni-006310) DBG | </network>
	I1007 13:54:55.751208  807372 main.go:141] libmachine: (newest-cni-006310) DBG | 
	I1007 13:54:55.757383  807372 main.go:141] libmachine: (newest-cni-006310) DBG | trying to create private KVM network mk-newest-cni-006310 192.168.72.0/24...
	I1007 13:54:55.838797  807372 main.go:141] libmachine: (newest-cni-006310) DBG | private KVM network mk-newest-cni-006310 192.168.72.0/24 created
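
The network.go lines above show the driver probing candidate private /24 subnets and settling on the first one not already claimed by another libvirt network (192.168.72.0/24 here, after 192.168.39/50/61 were found taken). A simplified sketch of that selection, with the taken set hard-coded from the log (an illustration under those assumptions, not the real network.go logic):

package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets and returns the first one that
// is not already used by an existing network.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			continue // e.g. "skipping subnet 192.168.39.0/24 that is taken"
		}
		return cidr, true
	}
	return "", false
}

func main() {
	taken := map[string]bool{ // subnets the log reports as taken
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
		"192.168.61.0/24": true,
	}
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.72.0/24
	}
}
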
	I1007 13:54:55.838847  807372 main.go:141] libmachine: (newest-cni-006310) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310 ...
	I1007 13:54:55.838879  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:55.838750  807395 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:54:55.838937  807372 main.go:141] libmachine: (newest-cni-006310) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:54:55.839043  807372 main.go:141] libmachine: (newest-cni-006310) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:54:56.164890  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:56.164703  807395 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/id_rsa...
	I1007 13:54:56.324041  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:56.323858  807395 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/newest-cni-006310.rawdisk...
	I1007 13:54:56.324146  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Writing magic tar header
	I1007 13:54:56.324169  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Writing SSH key tar header
	I1007 13:54:56.324185  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:56.324014  807395 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310 ...
	I1007 13:54:56.324201  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310 (perms=drwx------)
	I1007 13:54:56.324219  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:54:56.324231  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:54:56.324247  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310
	I1007 13:54:56.324273  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:54:56.324300  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:54:56.324314  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:54:56.324328  807372 main.go:141] libmachine: (newest-cni-006310) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:54:56.324340  807372 main.go:141] libmachine: (newest-cni-006310) Creating domain...
	I1007 13:54:56.324359  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:54:56.324369  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:54:56.324379  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:54:56.324397  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:54:56.324408  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Checking permissions on dir: /home
	I1007 13:54:56.324416  807372 main.go:141] libmachine: (newest-cni-006310) DBG | Skipping /home - not owner
	I1007 13:54:56.325682  807372 main.go:141] libmachine: (newest-cni-006310) define libvirt domain using xml: 
	I1007 13:54:56.325703  807372 main.go:141] libmachine: (newest-cni-006310) <domain type='kvm'>
	I1007 13:54:56.325713  807372 main.go:141] libmachine: (newest-cni-006310)   <name>newest-cni-006310</name>
	I1007 13:54:56.325722  807372 main.go:141] libmachine: (newest-cni-006310)   <memory unit='MiB'>2200</memory>
	I1007 13:54:56.325728  807372 main.go:141] libmachine: (newest-cni-006310)   <vcpu>2</vcpu>
	I1007 13:54:56.325733  807372 main.go:141] libmachine: (newest-cni-006310)   <features>
	I1007 13:54:56.325739  807372 main.go:141] libmachine: (newest-cni-006310)     <acpi/>
	I1007 13:54:56.325743  807372 main.go:141] libmachine: (newest-cni-006310)     <apic/>
	I1007 13:54:56.325748  807372 main.go:141] libmachine: (newest-cni-006310)     <pae/>
	I1007 13:54:56.325755  807372 main.go:141] libmachine: (newest-cni-006310)     
	I1007 13:54:56.325760  807372 main.go:141] libmachine: (newest-cni-006310)   </features>
	I1007 13:54:56.325781  807372 main.go:141] libmachine: (newest-cni-006310)   <cpu mode='host-passthrough'>
	I1007 13:54:56.325786  807372 main.go:141] libmachine: (newest-cni-006310)   
	I1007 13:54:56.325796  807372 main.go:141] libmachine: (newest-cni-006310)   </cpu>
	I1007 13:54:56.325829  807372 main.go:141] libmachine: (newest-cni-006310)   <os>
	I1007 13:54:56.325854  807372 main.go:141] libmachine: (newest-cni-006310)     <type>hvm</type>
	I1007 13:54:56.325865  807372 main.go:141] libmachine: (newest-cni-006310)     <boot dev='cdrom'/>
	I1007 13:54:56.325890  807372 main.go:141] libmachine: (newest-cni-006310)     <boot dev='hd'/>
	I1007 13:54:56.325902  807372 main.go:141] libmachine: (newest-cni-006310)     <bootmenu enable='no'/>
	I1007 13:54:56.325911  807372 main.go:141] libmachine: (newest-cni-006310)   </os>
	I1007 13:54:56.325935  807372 main.go:141] libmachine: (newest-cni-006310)   <devices>
	I1007 13:54:56.325956  807372 main.go:141] libmachine: (newest-cni-006310)     <disk type='file' device='cdrom'>
	I1007 13:54:56.325972  807372 main.go:141] libmachine: (newest-cni-006310)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/boot2docker.iso'/>
	I1007 13:54:56.325988  807372 main.go:141] libmachine: (newest-cni-006310)       <target dev='hdc' bus='scsi'/>
	I1007 13:54:56.325995  807372 main.go:141] libmachine: (newest-cni-006310)       <readonly/>
	I1007 13:54:56.326001  807372 main.go:141] libmachine: (newest-cni-006310)     </disk>
	I1007 13:54:56.326055  807372 main.go:141] libmachine: (newest-cni-006310)     <disk type='file' device='disk'>
	I1007 13:54:56.326077  807372 main.go:141] libmachine: (newest-cni-006310)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:54:56.326093  807372 main.go:141] libmachine: (newest-cni-006310)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/newest-cni-006310/newest-cni-006310.rawdisk'/>
	I1007 13:54:56.326123  807372 main.go:141] libmachine: (newest-cni-006310)       <target dev='hda' bus='virtio'/>
	I1007 13:54:56.326136  807372 main.go:141] libmachine: (newest-cni-006310)     </disk>
	I1007 13:54:56.326148  807372 main.go:141] libmachine: (newest-cni-006310)     <interface type='network'>
	I1007 13:54:56.326162  807372 main.go:141] libmachine: (newest-cni-006310)       <source network='mk-newest-cni-006310'/>
	I1007 13:54:56.326184  807372 main.go:141] libmachine: (newest-cni-006310)       <model type='virtio'/>
	I1007 13:54:56.326197  807372 main.go:141] libmachine: (newest-cni-006310)     </interface>
	I1007 13:54:56.326206  807372 main.go:141] libmachine: (newest-cni-006310)     <interface type='network'>
	I1007 13:54:56.326219  807372 main.go:141] libmachine: (newest-cni-006310)       <source network='default'/>
	I1007 13:54:56.326229  807372 main.go:141] libmachine: (newest-cni-006310)       <model type='virtio'/>
	I1007 13:54:56.326239  807372 main.go:141] libmachine: (newest-cni-006310)     </interface>
	I1007 13:54:56.326250  807372 main.go:141] libmachine: (newest-cni-006310)     <serial type='pty'>
	I1007 13:54:56.326262  807372 main.go:141] libmachine: (newest-cni-006310)       <target port='0'/>
	I1007 13:54:56.326270  807372 main.go:141] libmachine: (newest-cni-006310)     </serial>
	I1007 13:54:56.326280  807372 main.go:141] libmachine: (newest-cni-006310)     <console type='pty'>
	I1007 13:54:56.326291  807372 main.go:141] libmachine: (newest-cni-006310)       <target type='serial' port='0'/>
	I1007 13:54:56.326303  807372 main.go:141] libmachine: (newest-cni-006310)     </console>
	I1007 13:54:56.326313  807372 main.go:141] libmachine: (newest-cni-006310)     <rng model='virtio'>
	I1007 13:54:56.326330  807372 main.go:141] libmachine: (newest-cni-006310)       <backend model='random'>/dev/random</backend>
	I1007 13:54:56.326344  807372 main.go:141] libmachine: (newest-cni-006310)     </rng>
	I1007 13:54:56.326353  807372 main.go:141] libmachine: (newest-cni-006310)     
	I1007 13:54:56.326363  807372 main.go:141] libmachine: (newest-cni-006310)     
	I1007 13:54:56.326372  807372 main.go:141] libmachine: (newest-cni-006310)   </devices>
	I1007 13:54:56.326381  807372 main.go:141] libmachine: (newest-cni-006310) </domain>
	I1007 13:54:56.326394  807372 main.go:141] libmachine: (newest-cni-006310) 
	I1007 13:54:56.330617  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:5a:2a:53 in network default
	I1007 13:54:56.331219  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:56.331252  807372 main.go:141] libmachine: (newest-cni-006310) Ensuring networks are active...
	I1007 13:54:56.331911  807372 main.go:141] libmachine: (newest-cni-006310) Ensuring network default is active
	I1007 13:54:56.332183  807372 main.go:141] libmachine: (newest-cni-006310) Ensuring network mk-newest-cni-006310 is active
	I1007 13:54:56.332759  807372 main.go:141] libmachine: (newest-cni-006310) Getting domain xml...
	I1007 13:54:56.333537  807372 main.go:141] libmachine: (newest-cni-006310) Creating domain...
	I1007 13:54:56.699571  807372 main.go:141] libmachine: (newest-cni-006310) Waiting to get IP...
	I1007 13:54:56.700337  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:56.700791  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:56.700821  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:56.700762  807395 retry.go:31] will retry after 201.157218ms: waiting for machine to come up
	I1007 13:54:56.903246  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:56.903783  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:56.903815  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:56.903719  807395 retry.go:31] will retry after 290.821316ms: waiting for machine to come up
	I1007 13:54:57.196254  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:57.196749  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:57.196777  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:57.196723  807395 retry.go:31] will retry after 305.405795ms: waiting for machine to come up
	I1007 13:54:57.504203  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:57.504745  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:57.504769  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:57.504682  807395 retry.go:31] will retry after 398.718046ms: waiting for machine to come up
	I1007 13:54:57.905282  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:57.905794  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:57.905817  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:57.905722  807395 retry.go:31] will retry after 741.654099ms: waiting for machine to come up
	I1007 13:54:58.648653  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:58.649225  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:58.649266  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:58.649174  807395 retry.go:31] will retry after 593.595078ms: waiting for machine to come up
	I1007 13:54:59.244117  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:54:59.244635  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:54:59.244663  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:54:59.244589  807395 retry.go:31] will retry after 1.066533378s: waiting for machine to come up
	I1007 13:55:00.313065  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:00.313485  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:00.313528  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:00.313442  807395 retry.go:31] will retry after 986.38626ms: waiting for machine to come up
	I1007 13:55:01.301284  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:01.301794  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:01.301826  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:01.301742  807395 retry.go:31] will retry after 1.173678303s: waiting for machine to come up
	I1007 13:55:02.477438  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:02.478208  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:02.478234  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:02.478147  807395 retry.go:31] will retry after 1.406397066s: waiting for machine to come up
	I1007 13:55:03.885917  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:03.886483  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:03.886512  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:03.886433  807395 retry.go:31] will retry after 2.140502362s: waiting for machine to come up
	I1007 13:55:06.028528  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:06.029074  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:06.029111  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:06.028983  807395 retry.go:31] will retry after 2.908571064s: waiting for machine to come up
	I1007 13:55:08.939089  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:08.939457  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:08.939479  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:08.939410  807395 retry.go:31] will retry after 3.963136636s: waiting for machine to come up
	I1007 13:55:12.905630  807372 main.go:141] libmachine: (newest-cni-006310) DBG | domain newest-cni-006310 has defined MAC address 52:54:00:d7:7d:b5 in network mk-newest-cni-006310
	I1007 13:55:12.906090  807372 main.go:141] libmachine: (newest-cni-006310) DBG | unable to find current IP address of domain newest-cni-006310 in network mk-newest-cni-006310
	I1007 13:55:12.906112  807372 main.go:141] libmachine: (newest-cni-006310) DBG | I1007 13:55:12.906058  807395 retry.go:31] will retry after 3.836594403s: waiting for machine to come up
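	The repeated libmachine lines above show minikube's retry helper polling libvirt for the new VM's IP address, sleeping for a progressively longer, jittered interval between attempts; each log line corresponds to one failed lookup plus the backoff chosen for the next try. Purely as an illustration of that pattern (this is not minikube's actual retry.go; lookupIP and waitForIP are hypothetical names), a loop of the following shape produces the same kind of "will retry after ...: waiting for machine to come up" output:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // lookupIP stands in for the libvirt query that asks for the domain's
	    // current IP address; here it always fails so the backoff loop is visible.
	    // (Hypothetical helper, for illustration only.)
	    func lookupIP(domain string) (string, error) {
	        return "", errors.New("unable to find current IP address of domain " + domain)
	    }

	    // waitForIP retries lookupIP with a growing, jittered delay, mirroring the
	    // "will retry after ...: waiting for machine to come up" messages above.
	    func waitForIP(domain string, attempts int) (string, error) {
	        base := 500 * time.Millisecond
	        for i := 0; i < attempts; i++ {
	            if ip, err := lookupIP(domain); err == nil {
	                return ip, nil
	            }
	            wait := base + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
	            time.Sleep(wait)
	            base = base * 3 / 2 // roughly the growth visible in the log
	        }
	        return "", fmt.Errorf("machine %s never reported an IP address", domain)
	    }

	    func main() {
	        if _, err := waitForIP("newest-cni-006310", 5); err != nil {
	            fmt.Println(err)
	        }
	    }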
	
	
	==> CRI-O <==
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.484631310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309316484606051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cbf889f-5b70-45f2-82e1-4dfd3db23e02 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.485235762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e88a346d-9b0c-40fa-957c-928d48e7a218 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.485313531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e88a346d-9b0c-40fa-957c-928d48e7a218 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.485551614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e88a346d-9b0c-40fa-957c-928d48e7a218 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.528629387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c8e2605-67a0-47d5-a48b-30228678dbeb name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.528707625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c8e2605-67a0-47d5-a48b-30228678dbeb name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.529965185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ff0641b-7265-456c-84cc-3a835ff2542d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.530520290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309316530494589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ff0641b-7265-456c-84cc-3a835ff2542d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.531198892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ef95ce9-3732-4bb4-9a2b-e8adb17103b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.531253516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ef95ce9-3732-4bb4-9a2b-e8adb17103b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.531464916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ef95ce9-3732-4bb4-9a2b-e8adb17103b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.568891983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbfc72b3-95f1-4b13-92cd-93002ab71b8b name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.568975519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbfc72b3-95f1-4b13-92cd-93002ab71b8b name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.570619885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46b2ef1f-331c-40d0-abf8-e11eca20a15c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.570966682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309316570944583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46b2ef1f-331c-40d0-abf8-e11eca20a15c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.571736522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=379f3a5b-d6ae-4a43-9299-62a26e551f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.571917439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=379f3a5b-d6ae-4a43-9299-62a26e551f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.572211022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=379f3a5b-d6ae-4a43-9299-62a26e551f91 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.609898740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e0bd53e-971e-4d92-a478-a1fd56559590 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.609997705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e0bd53e-971e-4d92-a478-a1fd56559590 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.617395646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c159681-4fac-4764-9946-cc1c3cedaa87 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.618495788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309316618414312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c159681-4fac-4764-9946-cc1c3cedaa87 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.619901844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b1913db-80c5-41b2-8c73-4642f8d19499 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.620049010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b1913db-80c5-41b2-8c73-4642f8d19499 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:55:16 no-preload-016701 crio[711]: time="2024-10-07 13:55:16.620726514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670,PodSandboxId:613ca80cd181348bc25ccb2e5549fe4136cb32474888e56bf637b016cd2ccf9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308480067330316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d1068f-0542-4c9d-a6d0-75fcca08cf58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df,PodSandboxId:551853bf60c131bc75f8d0c4e34d5813d51ff7b2d5d7e321d23d697eb68fe410,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728308479744267019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bjqg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba601e18-7fb7-4ad6-84ad-7480846bf394,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30,PodSandboxId:260b8ee5c8454131a031979a37438698e7c3c1eb43b13946d4899e787f379f8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479796759632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qq4hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d780dda-6153-47aa-95b0-88f5674dabf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be,PodSandboxId:2353a0e2ee0b104f02ca0b2a41a94151ef214300327b177536011120979753b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308479674536182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pdnlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 438ffc56-51bd-4100-9d6d-50b06b6bc15
9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d,PodSandboxId:c22c6a87ee1d847a87e60cda0f87c66c4bf994530ef70a7edae54294f368a77f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308468042973301,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd7b3fba26f2a91993ea00cf217984e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae,PodSandboxId:2a30d991008421ecd9845a2720fd4e6f608f929295091d80148e4369bbc53fcd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308468011362038,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 345aca5201bc3cf779e71ae01ed35606,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350,PodSandboxId:bc94d469c673e32758642eabfdca7b0fe4421e0809b9b3a0dfa4fe765b188804,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308467990849173,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc,PodSandboxId:fa87f639f0782a7f79ac3d6893eb575a220850f750ad86b097d881b3adb4bbd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308467999748985,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d487dd4c9268707a05bbb2d62dce3cce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0,PodSandboxId:3891b3950255129c29777172428c1263ae8f16e77670a9bab168ab0c2020fc4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308179118514427,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-016701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1678a3684946ed69fa7cc76e1b5fc5f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b1913db-80c5-41b2-8c73-4642f8d19499 name=/runtime.v1.RuntimeService/ListContainers
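	The ListContainers/ImageFsInfo exchanges above are ordinary CRI polling over CRI-O's unix socket, and they are the raw data behind the "container status" table that follows. As a rough, self-contained sketch of the same query (assuming the standard k8s.io/cri-api and gRPC Go packages; this client is not part of the test tooling), the container list could be fetched like this:

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Same endpoint CRI-O serves in the log above.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // An empty filter returns the full container list, exactly as in the
	        // "No filters were applied" debug lines above.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	        }
	    }

	On the node itself, "sudo crictl ps -a" prints essentially the same view, matching the column layout of the container status table below.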
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94edaab72692f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   613ca80cd1813       storage-provisioner
	77f4235b3f737       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   260b8ee5c8454       coredns-7c65d6cfc9-qq4hc
	3b49e546c6c3a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   13 minutes ago      Running             kube-proxy                0                   551853bf60c13       kube-proxy-bjqg2
	d0155669cedd9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   2353a0e2ee0b1       coredns-7c65d6cfc9-pdnlq
	caf6629f0f9a5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   c22c6a87ee1d8       kube-scheduler-no-preload-016701
	2b732f5571fae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   2a30d99100842       kube-controller-manager-no-preload-016701
	2fc99bea0fa86       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   fa87f639f0782       etcd-no-preload-016701
	abfd5843e8f3f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            2                   bc94d469c673e       kube-apiserver-no-preload-016701
	c94ba6e728b7a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   18 minutes ago      Exited              kube-apiserver            1                   3891b39502551       kube-apiserver-no-preload-016701
	
	
	==> coredns [77f4235b3f737d23eceb6eef24189e51d55be580c55bc5a4326182bbde74de30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d0155669cedd961a4567462125531f04bc1a9fc25c237f1cd6b9e15b56b7a5be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-016701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-016701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=no-preload-016701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:41:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-016701
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:55:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:51:36 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:51:36 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:51:36 +0000   Mon, 07 Oct 2024 13:41:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:51:36 +0000   Mon, 07 Oct 2024 13:41:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    no-preload-016701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2608db7ca5142dda5055018b77ff816
	  System UUID:                a2608db7-ca51-42dd-a505-5018b77ff816
	  Boot ID:                    a7bb47b5-1411-4ce0-b484-4aa4ef503a72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-pdnlq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-qq4hc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-016701                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-016701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-016701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bjqg2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-016701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-s7qkh              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-016701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-016701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-016701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-016701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-016701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-016701 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-016701 event: Registered Node no-preload-016701 in Controller
	
	
	==> dmesg <==
	[  +0.066356] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.480495] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.873885] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.623457] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.662699] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.067510] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077968] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.205696] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.162708] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.374861] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Oct 7 13:36] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.068683] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.593849] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +4.590762] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.039901] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 7 13:41] systemd-fstab-generator[3005]: Ignoring "noauto" option for root device
	[  +0.060474] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.994875] systemd-fstab-generator[3326]: Ignoring "noauto" option for root device
	[  +0.077415] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.862627] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.631708] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.619775] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [2fc99bea0fa866a84230b20ce2004b8c2809b2c750c9e384824c3b46a65abcbc] <==
	{"level":"info","ts":"2024-10-07T13:41:08.364580Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6a8c9de3121f6040","initial-advertise-peer-urls":["https://192.168.39.197:2380"],"listen-peer-urls":["https://192.168.39.197:2380"],"advertise-client-urls":["https://192.168.39.197:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.197:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T13:41:08.364640Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T13:41:09.125248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 received MsgPreVoteResp from 6a8c9de3121f6040 at term 1"}
	{"level":"info","ts":"2024-10-07T13:41:09.125343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 received MsgVoteResp from 6a8c9de3121f6040 at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a8c9de3121f6040 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.125375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a8c9de3121f6040 elected leader 6a8c9de3121f6040 at term 2"}
	{"level":"info","ts":"2024-10-07T13:41:09.129314Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.133439Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6a8c9de3121f6040","local-member-attributes":"{Name:no-preload-016701 ClientURLs:[https://192.168.39.197:2379]}","request-path":"/0/members/6a8c9de3121f6040/attributes","cluster-id":"7da2d91c76c1be47","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T13:41:09.133497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:41:09.133933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T13:41:09.134658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:41:09.141527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T13:41:09.149231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T13:41:09.149311Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T13:41:09.149453Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T13:41:09.152338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.197:2379"}
	{"level":"info","ts":"2024-10-07T13:41:09.152482Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7da2d91c76c1be47","local-member-id":"6a8c9de3121f6040","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.152568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:41:09.152617Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:51:09.210731Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-10-07T13:51:09.220216Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":724,"took":"9.146613ms","hash":1787626567,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2367488,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-07T13:51:09.220287Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1787626567,"revision":724,"compact-revision":-1}
	
	
	==> kernel <==
	 13:55:17 up 19 min,  0 users,  load average: 0.01, 0.10, 0.09
	Linux no-preload-016701 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [abfd5843e8f3fc5229f654448b293295972686818c10164cd84456e333f29350] <==
	W1007 13:51:11.899447       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:51:11.899573       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:51:11.900581       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:51:11.900621       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:52:11.901385       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:52:11.901533       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:52:11.901573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:52:11.901590       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1007 13:52:11.902705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:52:11.902788       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:54:11.903860       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:11.903996       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:54:11.904086       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:54:11.904202       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:54:11.905170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:54:11.905241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c94ba6e728b7a76febf9d7ead6b49a7df2859d10df8898b4b1ae9c663b198ef0] <==
	W1007 13:40:59.574347       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.695446       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.697968       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.777385       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.815761       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.952546       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:40:59.959361       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:00.001206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:00.096443       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:03.533552       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:03.591429       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.007735       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.021500       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.056038       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.179775       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.232797       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.240431       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.260603       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.402427       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.428167       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.500935       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.505520       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.556775       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.602630       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:41:04.659326       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2b732f5571fae4f95f0cfd6bf898915be7004b7a82e3df277c5429a4d2b3fdae] <==
	E1007 13:49:47.947590       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:49:48.418475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:50:17.954603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:50:18.426842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:50:47.961307       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:50:48.435305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:51:17.967577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:51:18.443780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:51:36.414161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-016701"
	E1007 13:51:47.975694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:51:48.453320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:52:16.838912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="219.03µs"
	E1007 13:52:17.982269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:52:18.467936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:52:28.837740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="142.131µs"
	E1007 13:52:47.989556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:52:48.478211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:17.996615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:18.486698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:53:48.004331       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:53:48.495917       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:54:18.013907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:18.507036       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:54:48.020694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:54:48.515372       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3b49e546c6c3ae144a18c00c270a567d90f9d9539e967470385713f0bb5d48df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:41:20.397358       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:41:20.413806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.197"]
	E1007 13:41:20.415592       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:41:20.517857       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:41:20.517896       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:41:20.517929       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:41:20.547522       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:41:20.547707       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:41:20.547716       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:41:20.551868       1 config.go:199] "Starting service config controller"
	I1007 13:41:20.551967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:41:20.552300       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:41:20.552401       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:41:20.555504       1 config.go:328] "Starting node config controller"
	I1007 13:41:20.555569       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:41:20.652826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:41:20.652883       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:41:20.655645       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [caf6629f0f9a5492fd649a2e48052ffbd293847612d549c716b2e0520723446d] <==
	W1007 13:41:11.738542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:11.738557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.803496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:11.803727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.857746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:41:11.857803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.867872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:41:11.867928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.873200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:41:11.873353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:11.967351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:41:11.967404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.087217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:12.087840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.144413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:41:12.145236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.145519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:41:12.145607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.149362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:41:12.149440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.254884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:41:12.255162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:41:12.453028       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:41:12.453079       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 13:41:14.315519       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:54:13 no-preload-016701 kubelet[3333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:54:14 no-preload-016701 kubelet[3333]: E1007 13:54:14.056300    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309254055652505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:14 no-preload-016701 kubelet[3333]: E1007 13:54:14.056488    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309254055652505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:18 no-preload-016701 kubelet[3333]: E1007 13:54:18.820478    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:54:24 no-preload-016701 kubelet[3333]: E1007 13:54:24.058043    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309264057749076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:24 no-preload-016701 kubelet[3333]: E1007 13:54:24.058394    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309264057749076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:31 no-preload-016701 kubelet[3333]: E1007 13:54:31.822539    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:54:34 no-preload-016701 kubelet[3333]: E1007 13:54:34.060583    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309274060030740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:34 no-preload-016701 kubelet[3333]: E1007 13:54:34.061017    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309274060030740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:44 no-preload-016701 kubelet[3333]: E1007 13:54:44.063195    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309284062536992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:44 no-preload-016701 kubelet[3333]: E1007 13:54:44.063484    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309284062536992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:45 no-preload-016701 kubelet[3333]: E1007 13:54:45.820808    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:54:54 no-preload-016701 kubelet[3333]: E1007 13:54:54.065273    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309294064856383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:54 no-preload-016701 kubelet[3333]: E1007 13:54:54.065329    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309294064856383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:54:58 no-preload-016701 kubelet[3333]: E1007 13:54:58.819822    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:55:04 no-preload-016701 kubelet[3333]: E1007 13:55:04.070301    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309304069799157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:04 no-preload-016701 kubelet[3333]: E1007 13:55:04.070348    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309304069799157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:12 no-preload-016701 kubelet[3333]: E1007 13:55:12.819287    3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-s7qkh" podUID="421db538-caa5-46ae-85bb-7c70aea43877"
	Oct 07 13:55:13 no-preload-016701 kubelet[3333]: E1007 13:55:13.835695    3333 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 13:55:13 no-preload-016701 kubelet[3333]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 13:55:13 no-preload-016701 kubelet[3333]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 13:55:13 no-preload-016701 kubelet[3333]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 13:55:13 no-preload-016701 kubelet[3333]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 13:55:14 no-preload-016701 kubelet[3333]: E1007 13:55:14.072254    3333 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309314071934004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:55:14 no-preload-016701 kubelet[3333]: E1007 13:55:14.072281    3333 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309314071934004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [94edaab72692f2a55b66576ea60555be36e29c6cce82a425db73e0d3d2e7c670] <==
	I1007 13:41:20.450738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:41:20.467512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:41:20.468708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:41:20.493219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:41:20.493408       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8!
	I1007 13:41:20.497597       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c02c4af-e407-4666-a147-f0763dc9f6d3", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8 became leader
	I1007 13:41:20.594321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-016701_0c04ee52-62d5-4d48-9f69-736860be3cc8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016701 -n no-preload-016701
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-016701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-s7qkh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh: exit status 1 (70.953109ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-s7qkh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-016701 describe pod metrics-server-6867b74b74-s7qkh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (290.16s)
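The failure above is consistent with the kubelet log earlier in this dump: the only non-running pod, metrics-server-6867b74b74-s7qkh, sat in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (apparently a deliberately unreachable test image), which is also why the kube-apiserver log shows v1beta1.metrics.k8s.io answering 503. A minimal manual spot-check, assuming the no-preload-016701 profile were still up and the addon manages a Deployment named metrics-server (both assumptions, not part of the captured run), might look like:

    # hypothetical spot-check; everything except the context name is an assumption
    kubectl --context no-preload-016701 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-016701 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

The second command would be expected to print the fake.domain image seen in the back-off messages above.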

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (124.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.103:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.103:8443: connect: connection refused
E1007 13:53:16.775592  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (261.305868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-120978" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-120978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-120978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.392µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-120978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
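The warnings above come from the test repeatedly listing pods by label selector against an apiserver that is no longer reachable, until the 9m0s deadline expires. As a rough illustration only (not minikube's actual helper), a minimal client-go poll of this kind might look like the sketch below; the function name, retry interval, and kubeconfig path are assumptions made for the example.

// Hypothetical sketch of a label-selector pod wait; illustrative only, not the
// helpers_test.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching selector in ns reports Ready,
// or the timeout elapses.
func waitForLabeledPod(ctx context.Context, client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Transient apiserver errors (e.g. connection refused while the node
			// is stopped) are logged and retried rather than ending the wait.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			return false, nil
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPod(context.Background(), client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}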
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (246.352894ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
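The status checks above pass --format a Go text/template that is rendered against the profile's component status, which is why {{.APIServer}} and {{.Host}} print a single word such as "Stopped" or "Running". A minimal sketch of that template mechanism follows, with an illustrative struct standing in for minikube's real status type.

// Hypothetical sketch of rendering a --format style template; the Status
// struct and its field values here are illustrative, not minikube's own type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// Equivalent in spirit to: status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
}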
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-120978 logs -n 25
E1007 13:54:53.448660  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-120978 logs -n 25: (1.020510518s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:25 UTC | 07 Oct 24 13:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-625039                           | kubernetes-upgrade-625039    | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:26 UTC |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:26 UTC | 07 Oct 24 13:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:30 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-016701             | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-653322            | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC | 07 Oct 24 13:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-120978        | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016701                  | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016701                                   | no-preload-016701            | jenkins | v1.34.0 | 07 Oct 24 13:29 UTC | 07 Oct 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-653322                 | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-653322                                  | embed-certs-653322           | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-004876                              | cert-expiration-004876       | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	| delete  | -p                                                     | disable-driver-mounts-288417 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | disable-driver-mounts-288417                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:35 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-120978             | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-120978                              | old-k8s-version-120978       | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-489319  | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:36 UTC |                     |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-489319       | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-489319 | jenkins | v1.34.0 | 07 Oct 24 13:38 UTC | 07 Oct 24 13:48 UTC |
	|         | default-k8s-diff-port-489319                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:38:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:38:32.108474  802960 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:38:32.108648  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108659  802960 out.go:358] Setting ErrFile to fd 2...
	I1007 13:38:32.108665  802960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:38:32.108864  802960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:38:32.109477  802960 out.go:352] Setting JSON to false
	I1007 13:38:32.110672  802960 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12061,"bootTime":1728296251,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:38:32.110773  802960 start.go:139] virtualization: kvm guest
	I1007 13:38:32.113566  802960 out.go:177] * [default-k8s-diff-port-489319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:38:32.115580  802960 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:38:32.115627  802960 notify.go:220] Checking for updates...
	I1007 13:38:32.118464  802960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:38:32.120173  802960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:38:32.121799  802960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:38:32.123382  802960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:38:32.125020  802960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:38:29.209336  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:31.212514  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:32.126861  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:38:32.127255  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.127337  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.143671  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1007 13:38:32.144158  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.144820  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.144844  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.145206  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.145416  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.145655  802960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:38:32.146010  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.146112  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.161508  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1007 13:38:32.162082  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.162517  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.162541  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.162886  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.163112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.200281  802960 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:38:32.201380  802960 start.go:297] selected driver: kvm2
	I1007 13:38:32.201393  802960 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.201499  802960 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:38:32.202260  802960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.202353  802960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:38:32.218742  802960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:38:32.219129  802960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:38:32.219168  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:38:32.219221  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:38:32.219254  802960 start.go:340] cluster config:
	{Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:38:32.219380  802960 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:38:32.222273  802960 out.go:177] * Starting "default-k8s-diff-port-489319" primary control-plane node in "default-k8s-diff-port-489319" cluster
	I1007 13:38:32.223750  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:38:32.223801  802960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:38:32.223816  802960 cache.go:56] Caching tarball of preloaded images
	I1007 13:38:32.223891  802960 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:38:32.223901  802960 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:38:32.223997  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:38:32.224208  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:38:32.224280  802960 start.go:364] duration metric: took 38.73µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:38:32.224297  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:38:32.224303  802960 fix.go:54] fixHost starting: 
	I1007 13:38:32.224637  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:38:32.224674  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:38:32.239368  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I1007 13:38:32.239869  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:38:32.240386  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:38:32.240409  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:38:32.240813  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:38:32.241063  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.241228  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:38:32.243196  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Running err=<nil>
	W1007 13:38:32.243217  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:38:32.245881  802960 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-489319" VM ...
	I1007 13:38:30.514797  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:33.014487  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:30.891736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:30.891810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:30.926900  800812 cri.go:89] found id: ""
	I1007 13:38:30.926934  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.926945  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:30.926953  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:30.927020  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:30.962704  800812 cri.go:89] found id: ""
	I1007 13:38:30.962742  800812 logs.go:282] 0 containers: []
	W1007 13:38:30.962760  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:30.962769  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:30.962839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:31.000947  800812 cri.go:89] found id: ""
	I1007 13:38:31.000986  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.000999  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:31.001009  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:31.001079  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:31.040687  800812 cri.go:89] found id: ""
	I1007 13:38:31.040734  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.040743  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:31.040750  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:31.040808  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:31.077841  800812 cri.go:89] found id: ""
	I1007 13:38:31.077872  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.077891  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:31.077900  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:31.077975  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:31.128590  800812 cri.go:89] found id: ""
	I1007 13:38:31.128625  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.128638  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:31.128736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:31.128947  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:31.170110  800812 cri.go:89] found id: ""
	I1007 13:38:31.170140  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.170149  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:31.170157  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:31.170231  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:31.229262  800812 cri.go:89] found id: ""
	I1007 13:38:31.229297  800812 logs.go:282] 0 containers: []
	W1007 13:38:31.229310  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:31.229327  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:31.229343  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:31.281680  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:31.281727  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:31.296076  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:31.296111  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:31.367443  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:31.367468  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:31.367488  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:31.449882  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:31.449933  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:33.993958  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:34.007064  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:34.007150  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:34.043479  800812 cri.go:89] found id: ""
	I1007 13:38:34.043517  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.043529  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:34.043537  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:34.043609  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:34.080953  800812 cri.go:89] found id: ""
	I1007 13:38:34.081006  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.081019  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:34.081028  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:34.081100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:34.117708  800812 cri.go:89] found id: ""
	I1007 13:38:34.117741  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.117749  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:34.117756  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:34.117823  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:34.154457  800812 cri.go:89] found id: ""
	I1007 13:38:34.154487  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.154499  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:34.154507  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:34.154586  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:34.192037  800812 cri.go:89] found id: ""
	I1007 13:38:34.192070  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.192080  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:34.192088  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:34.192159  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:34.230404  800812 cri.go:89] found id: ""
	I1007 13:38:34.230441  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.230453  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:34.230461  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:34.230529  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:34.266650  800812 cri.go:89] found id: ""
	I1007 13:38:34.266712  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.266726  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:34.266736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:34.266832  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:34.302731  800812 cri.go:89] found id: ""
	I1007 13:38:34.302767  800812 logs.go:282] 0 containers: []
	W1007 13:38:34.302784  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:34.302807  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:34.302828  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:34.377367  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:34.377400  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:34.377417  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:34.453185  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:34.453232  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:34.498235  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:34.498269  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:34.548177  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:34.548224  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:32.247486  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:38:32.247524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:38:32.247949  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:38:32.250961  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:38:32.251539  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:38:32.251823  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:38:32.252088  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252375  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:38:32.252605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:38:32.252944  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:38:32.253182  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:38:32.253197  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:38:35.122367  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:33.709093  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.709691  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:35.514611  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:38.014557  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:37.065875  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:37.079772  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:37.079868  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:37.115654  800812 cri.go:89] found id: ""
	I1007 13:38:37.115685  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.115696  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:37.115709  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:37.115777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:37.156963  800812 cri.go:89] found id: ""
	I1007 13:38:37.157001  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.157013  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:37.157022  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:37.157080  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:37.199210  800812 cri.go:89] found id: ""
	I1007 13:38:37.199243  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.199254  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:37.199263  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:37.199336  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:37.240823  800812 cri.go:89] found id: ""
	I1007 13:38:37.240868  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.240880  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:37.240889  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:37.240958  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:37.289164  800812 cri.go:89] found id: ""
	I1007 13:38:37.289192  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.289202  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:37.289210  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:37.289276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:37.330630  800812 cri.go:89] found id: ""
	I1007 13:38:37.330660  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.330669  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:37.330675  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:37.330731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:37.372401  800812 cri.go:89] found id: ""
	I1007 13:38:37.372431  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.372439  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:37.372446  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:37.372500  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:37.413585  800812 cri.go:89] found id: ""
	I1007 13:38:37.413617  800812 logs.go:282] 0 containers: []
	W1007 13:38:37.413625  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:37.413634  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:37.413646  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:37.458433  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:37.458471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:37.512720  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:37.512769  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:37.527774  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:37.527813  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:37.605381  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:37.605408  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:37.605422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.182809  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:40.196597  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:40.196671  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:40.236687  800812 cri.go:89] found id: ""
	I1007 13:38:40.236726  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.236738  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:40.236746  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:40.236814  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:40.271432  800812 cri.go:89] found id: ""
	I1007 13:38:40.271470  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.271479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:40.271485  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:40.271548  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:40.308972  800812 cri.go:89] found id: ""
	I1007 13:38:40.309014  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.309026  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:40.309044  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:40.309115  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:40.345363  800812 cri.go:89] found id: ""
	I1007 13:38:40.345404  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.345415  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:40.345424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:40.345506  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:40.378426  800812 cri.go:89] found id: ""
	I1007 13:38:40.378457  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.378465  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:40.378471  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:40.378525  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:40.415312  800812 cri.go:89] found id: ""
	I1007 13:38:40.415349  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.415370  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:40.415379  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:40.415448  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:40.452679  800812 cri.go:89] found id: ""
	I1007 13:38:40.452715  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.452727  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:40.452735  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:40.452810  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:40.490328  800812 cri.go:89] found id: ""
	I1007 13:38:40.490362  800812 logs.go:282] 0 containers: []
	W1007 13:38:40.490371  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:40.490382  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:40.490395  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:40.581489  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:40.581551  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:40.626827  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:40.626865  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:40.680180  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:40.680226  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:40.696284  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:40.696316  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:40.777722  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:38.198306  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:37.710573  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.210415  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:40.516522  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.013328  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:43.278317  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:43.292099  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:43.292180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:43.329487  800812 cri.go:89] found id: ""
	I1007 13:38:43.329518  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.329527  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:43.329534  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:43.329593  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:43.367622  800812 cri.go:89] found id: ""
	I1007 13:38:43.367653  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.367665  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:43.367674  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:43.367750  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:43.403439  800812 cri.go:89] found id: ""
	I1007 13:38:43.403477  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.403491  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:43.403499  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:43.403577  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:43.442974  800812 cri.go:89] found id: ""
	I1007 13:38:43.443019  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.443029  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:43.443037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:43.443102  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:43.479975  800812 cri.go:89] found id: ""
	I1007 13:38:43.480005  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.480013  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:43.480020  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:43.480091  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:43.521645  800812 cri.go:89] found id: ""
	I1007 13:38:43.521679  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.521695  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:43.521704  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:43.521763  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:43.558574  800812 cri.go:89] found id: ""
	I1007 13:38:43.558605  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.558614  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:43.558620  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:43.558687  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:43.594054  800812 cri.go:89] found id: ""
	I1007 13:38:43.594086  800812 logs.go:282] 0 containers: []
	W1007 13:38:43.594097  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:43.594111  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:43.594128  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:43.673587  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:43.673634  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:43.717642  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:43.717673  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:43.771524  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:43.771586  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:43.786726  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:43.786764  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:43.858645  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:44.274468  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:42.709396  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:44.709744  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.711052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:45.015094  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:47.513659  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:49.515994  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:46.359453  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:46.373401  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:46.373490  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:46.414387  800812 cri.go:89] found id: ""
	I1007 13:38:46.414416  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.414425  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:46.414432  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:46.414498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:46.451704  800812 cri.go:89] found id: ""
	I1007 13:38:46.451739  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.451751  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:46.451761  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:46.451822  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:46.487607  800812 cri.go:89] found id: ""
	I1007 13:38:46.487646  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.487657  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:46.487666  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:46.487747  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:46.527080  800812 cri.go:89] found id: ""
	I1007 13:38:46.527113  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.527121  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:46.527128  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:46.527182  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:46.565979  800812 cri.go:89] found id: ""
	I1007 13:38:46.566007  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.566016  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:46.566037  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:46.566095  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:46.604631  800812 cri.go:89] found id: ""
	I1007 13:38:46.604665  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.604674  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:46.604683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:46.604751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:46.643618  800812 cri.go:89] found id: ""
	I1007 13:38:46.643649  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.643660  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:46.643669  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:46.643741  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:46.686777  800812 cri.go:89] found id: ""
	I1007 13:38:46.686812  800812 logs.go:282] 0 containers: []
	W1007 13:38:46.686824  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:46.686837  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:46.686853  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:46.769689  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:46.769749  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:46.810903  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:46.810934  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:46.859958  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:46.860007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:46.874867  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:46.874902  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:46.945267  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.446436  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:49.460403  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:49.460493  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:49.498234  800812 cri.go:89] found id: ""
	I1007 13:38:49.498278  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.498290  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:49.498302  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:49.498376  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:49.539337  800812 cri.go:89] found id: ""
	I1007 13:38:49.539374  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.539386  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:49.539395  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:49.539465  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:49.580365  800812 cri.go:89] found id: ""
	I1007 13:38:49.580404  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.580415  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:49.580424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:49.580498  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:49.624591  800812 cri.go:89] found id: ""
	I1007 13:38:49.624627  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.624638  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:49.624652  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:49.624726  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:49.661718  800812 cri.go:89] found id: ""
	I1007 13:38:49.661750  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.661762  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:49.661776  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:49.661846  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:49.698356  800812 cri.go:89] found id: ""
	I1007 13:38:49.698389  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.698402  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:49.698410  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:49.698477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:49.735453  800812 cri.go:89] found id: ""
	I1007 13:38:49.735486  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.735497  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:49.735505  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:49.735578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:49.779530  800812 cri.go:89] found id: ""
	I1007 13:38:49.779558  800812 logs.go:282] 0 containers: []
	W1007 13:38:49.779567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:49.779577  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:49.779593  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:49.794020  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:49.794067  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:49.868060  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:49.868093  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:49.868110  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:49.946554  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:49.946599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:49.990212  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:49.990251  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:47.346399  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:49.208303  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:51.209295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.013939  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:54.514863  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:52.543303  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:52.559466  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:52.559535  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:52.601977  800812 cri.go:89] found id: ""
	I1007 13:38:52.602008  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.602018  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:52.602043  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:52.602104  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:52.640954  800812 cri.go:89] found id: ""
	I1007 13:38:52.640985  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.641005  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:52.641012  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:52.641067  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:52.682075  800812 cri.go:89] found id: ""
	I1007 13:38:52.682105  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.682113  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:52.682119  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:52.682184  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:52.722957  800812 cri.go:89] found id: ""
	I1007 13:38:52.722986  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.722994  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:52.723006  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:52.723062  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:52.764074  800812 cri.go:89] found id: ""
	I1007 13:38:52.764110  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.764122  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:52.764131  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:52.764210  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:52.805802  800812 cri.go:89] found id: ""
	I1007 13:38:52.805830  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.805838  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:52.805844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:52.805912  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:52.846116  800812 cri.go:89] found id: ""
	I1007 13:38:52.846148  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.846157  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:52.846164  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:52.846226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:52.888666  800812 cri.go:89] found id: ""
	I1007 13:38:52.888703  800812 logs.go:282] 0 containers: []
	W1007 13:38:52.888719  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:52.888733  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:52.888750  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:52.968131  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:52.968177  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:53.012585  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:53.012624  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:53.066638  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:53.066692  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:53.081227  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:53.081264  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:53.156955  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:55.657820  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:55.672261  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:55.672349  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:55.713096  800812 cri.go:89] found id: ""
	I1007 13:38:55.713124  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.713135  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:55.713143  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:55.713211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:55.748413  800812 cri.go:89] found id: ""
	I1007 13:38:55.748447  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.748457  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:55.748465  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:55.748534  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:55.781376  800812 cri.go:89] found id: ""
	I1007 13:38:55.781412  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.781424  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:55.781433  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:55.781502  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:55.817653  800812 cri.go:89] found id: ""
	I1007 13:38:55.817681  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.817690  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:55.817697  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:55.817767  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:55.853133  800812 cri.go:89] found id: ""
	I1007 13:38:55.853166  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.853177  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:55.853185  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:55.853255  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:53.426353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:56.498332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:38:53.709052  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.710245  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:57.014521  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:59.020215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:38:55.891659  800812 cri.go:89] found id: ""
	I1007 13:38:55.891691  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.891720  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:55.891730  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:55.891794  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:55.929345  800812 cri.go:89] found id: ""
	I1007 13:38:55.929373  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.929381  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:55.929388  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:55.929461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:55.963379  800812 cri.go:89] found id: ""
	I1007 13:38:55.963410  800812 logs.go:282] 0 containers: []
	W1007 13:38:55.963419  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:55.963428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:55.963444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:56.006795  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:56.006837  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:56.060896  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:56.060942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:56.076353  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:56.076394  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:56.157464  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:56.157492  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:56.157510  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.747936  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:38:58.761415  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:38:58.761489  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:38:58.795181  800812 cri.go:89] found id: ""
	I1007 13:38:58.795216  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.795226  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:38:58.795232  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:38:58.795291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:38:58.828749  800812 cri.go:89] found id: ""
	I1007 13:38:58.828785  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.828795  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:38:58.828802  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:38:58.828865  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:38:58.867195  800812 cri.go:89] found id: ""
	I1007 13:38:58.867234  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.867243  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:38:58.867251  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:38:58.867311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:38:58.905348  800812 cri.go:89] found id: ""
	I1007 13:38:58.905387  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.905398  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:38:58.905407  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:38:58.905477  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:38:58.940553  800812 cri.go:89] found id: ""
	I1007 13:38:58.940626  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.940655  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:38:58.940667  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:38:58.940751  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:38:58.976595  800812 cri.go:89] found id: ""
	I1007 13:38:58.976643  800812 logs.go:282] 0 containers: []
	W1007 13:38:58.976652  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:38:58.976662  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:38:58.976719  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:38:59.014478  800812 cri.go:89] found id: ""
	I1007 13:38:59.014512  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.014521  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:38:59.014527  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:38:59.014594  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:38:59.051337  800812 cri.go:89] found id: ""
	I1007 13:38:59.051367  800812 logs.go:282] 0 containers: []
	W1007 13:38:59.051378  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:38:59.051391  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:38:59.051408  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:38:59.091689  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:38:59.091733  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:38:59.144431  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:38:59.144477  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:38:59.159436  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:38:59.159471  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:38:59.256248  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:38:59.256277  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:38:59.256293  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:38:58.208916  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:00.210007  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.514807  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:04.015032  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:01.846247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:01.861309  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:01.861389  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:01.898079  800812 cri.go:89] found id: ""
	I1007 13:39:01.898117  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.898129  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:01.898138  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:01.898211  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:01.933905  800812 cri.go:89] found id: ""
	I1007 13:39:01.933940  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.933951  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:01.933960  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:01.934056  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:01.970522  800812 cri.go:89] found id: ""
	I1007 13:39:01.970552  800812 logs.go:282] 0 containers: []
	W1007 13:39:01.970563  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:01.970580  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:01.970653  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:02.014210  800812 cri.go:89] found id: ""
	I1007 13:39:02.014245  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.014257  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:02.014265  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:02.014329  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:02.052014  800812 cri.go:89] found id: ""
	I1007 13:39:02.052053  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.052065  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:02.052073  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:02.052144  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:02.089966  800812 cri.go:89] found id: ""
	I1007 13:39:02.089998  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.090007  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:02.090014  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:02.090105  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:02.125933  800812 cri.go:89] found id: ""
	I1007 13:39:02.125970  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.125982  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:02.125991  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:02.126092  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:02.163348  800812 cri.go:89] found id: ""
	I1007 13:39:02.163381  800812 logs.go:282] 0 containers: []
	W1007 13:39:02.163394  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:02.163405  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:02.163422  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:02.218311  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:02.218351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:02.233345  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:02.233381  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:02.308402  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:02.308425  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:02.308444  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:02.387161  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:02.387207  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:04.931535  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:04.954002  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:04.954100  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:04.994745  800812 cri.go:89] found id: ""
	I1007 13:39:04.994783  800812 logs.go:282] 0 containers: []
	W1007 13:39:04.994795  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:04.994803  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:04.994903  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:05.031041  800812 cri.go:89] found id: ""
	I1007 13:39:05.031070  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.031078  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:05.031085  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:05.031157  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:05.075737  800812 cri.go:89] found id: ""
	I1007 13:39:05.075780  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.075788  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:05.075794  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:05.075849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:05.108984  800812 cri.go:89] found id: ""
	I1007 13:39:05.109019  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.109030  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:05.109038  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:05.109096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:05.145667  800812 cri.go:89] found id: ""
	I1007 13:39:05.145699  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.145707  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:05.145724  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:05.145780  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:05.182742  800812 cri.go:89] found id: ""
	I1007 13:39:05.182772  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.182783  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:05.182791  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:05.182859  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:05.223674  800812 cri.go:89] found id: ""
	I1007 13:39:05.223721  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.223731  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:05.223737  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:05.223802  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:05.263516  800812 cri.go:89] found id: ""
	I1007 13:39:05.263555  800812 logs.go:282] 0 containers: []
	W1007 13:39:05.263567  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:05.263581  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:05.263599  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:05.345447  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:05.345493  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:05.386599  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:05.386635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:05.439367  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:05.439410  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:05.455636  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:05.455671  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:05.541166  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:05.618355  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:02.709614  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:05.211295  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:06.514215  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.515091  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:08.041406  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:08.056425  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:08.056514  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:08.094066  800812 cri.go:89] found id: ""
	I1007 13:39:08.094098  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.094106  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:08.094113  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:08.094180  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:08.136241  800812 cri.go:89] found id: ""
	I1007 13:39:08.136277  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.136289  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:08.136297  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:08.136368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:08.176917  800812 cri.go:89] found id: ""
	I1007 13:39:08.176949  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.176958  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:08.176964  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:08.177019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:08.215278  800812 cri.go:89] found id: ""
	I1007 13:39:08.215313  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.215324  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:08.215331  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:08.215386  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:08.256965  800812 cri.go:89] found id: ""
	I1007 13:39:08.257002  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.257014  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:08.257023  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:08.257093  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:08.294680  800812 cri.go:89] found id: ""
	I1007 13:39:08.294716  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.294726  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:08.294736  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:08.294792  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:08.332832  800812 cri.go:89] found id: ""
	I1007 13:39:08.332862  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.332871  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:08.332878  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:08.332931  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:08.369893  800812 cri.go:89] found id: ""
	I1007 13:39:08.369927  800812 logs.go:282] 0 containers: []
	W1007 13:39:08.369939  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:08.369960  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:08.369987  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:08.448286  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:08.448337  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:08.493839  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:08.493877  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:08.549319  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:08.549365  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:08.564175  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:08.564211  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:08.636651  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:08.690293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:07.709699  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:10.208983  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.014066  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:13.014936  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:11.137682  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:11.152844  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:11.152934  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:11.187265  800812 cri.go:89] found id: ""
	I1007 13:39:11.187301  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.187313  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:11.187322  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:11.187384  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:11.222721  800812 cri.go:89] found id: ""
	I1007 13:39:11.222760  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.222776  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:11.222783  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:11.222842  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:11.261731  800812 cri.go:89] found id: ""
	I1007 13:39:11.261765  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.261774  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:11.261781  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:11.261841  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:11.299511  800812 cri.go:89] found id: ""
	I1007 13:39:11.299541  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.299556  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:11.299563  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:11.299615  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:11.338737  800812 cri.go:89] found id: ""
	I1007 13:39:11.338776  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.338787  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:11.338793  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:11.338851  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:11.382231  800812 cri.go:89] found id: ""
	I1007 13:39:11.382267  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.382277  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:11.382284  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:11.382344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:11.436147  800812 cri.go:89] found id: ""
	I1007 13:39:11.436179  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.436188  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:11.436195  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:11.436258  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:11.477332  800812 cri.go:89] found id: ""
	I1007 13:39:11.477367  800812 logs.go:282] 0 containers: []
	W1007 13:39:11.477380  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:11.477392  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:11.477411  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:11.531842  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:11.531887  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:11.546074  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:11.546103  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:11.617435  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:11.617455  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:11.617470  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:11.703173  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:11.703227  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.249507  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:14.263655  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:14.263740  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:14.300339  800812 cri.go:89] found id: ""
	I1007 13:39:14.300372  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.300381  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:14.300388  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:14.300441  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:14.338791  800812 cri.go:89] found id: ""
	I1007 13:39:14.338836  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.338849  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:14.338873  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:14.338960  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:14.376537  800812 cri.go:89] found id: ""
	I1007 13:39:14.376570  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.376582  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:14.376590  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:14.376648  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:14.411933  800812 cri.go:89] found id: ""
	I1007 13:39:14.411969  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.411981  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:14.411990  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:14.412057  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:14.449007  800812 cri.go:89] found id: ""
	I1007 13:39:14.449049  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.449060  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:14.449069  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:14.449129  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:14.489459  800812 cri.go:89] found id: ""
	I1007 13:39:14.489495  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.489507  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:14.489516  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:14.489575  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:14.529717  800812 cri.go:89] found id: ""
	I1007 13:39:14.529747  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.529756  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:14.529765  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:14.529820  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:14.566093  800812 cri.go:89] found id: ""
	I1007 13:39:14.566122  800812 logs.go:282] 0 containers: []
	W1007 13:39:14.566129  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:14.566139  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:14.566156  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:14.640009  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:14.640037  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:14.640051  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:14.726151  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:14.726201  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:14.771158  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:14.771195  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:14.824599  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:14.824644  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:14.774418  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:12.209697  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:14.710276  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:15.514317  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.514843  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:17.339940  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:17.361437  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:17.361511  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:17.402518  800812 cri.go:89] found id: ""
	I1007 13:39:17.402555  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.402566  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:17.402575  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:17.402645  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:17.454422  800812 cri.go:89] found id: ""
	I1007 13:39:17.454460  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.454472  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:17.454480  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:17.454552  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:17.497017  800812 cri.go:89] found id: ""
	I1007 13:39:17.497049  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.497060  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:17.497070  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:17.497142  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:17.534352  800812 cri.go:89] found id: ""
	I1007 13:39:17.534389  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.534399  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:17.534406  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:17.534461  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:17.568185  800812 cri.go:89] found id: ""
	I1007 13:39:17.568216  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.568225  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:17.568232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:17.568291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:17.611138  800812 cri.go:89] found id: ""
	I1007 13:39:17.611171  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.611182  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:17.611191  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:17.611260  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:17.649494  800812 cri.go:89] found id: ""
	I1007 13:39:17.649527  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.649536  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:17.649544  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:17.649604  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:17.690104  800812 cri.go:89] found id: ""
	I1007 13:39:17.690140  800812 logs.go:282] 0 containers: []
	W1007 13:39:17.690153  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:17.690166  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:17.690183  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:17.763419  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:17.763450  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:17.763467  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:17.841000  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:17.841050  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:17.879832  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:17.879862  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:17.932754  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:17.932796  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.447864  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:20.462219  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:20.462301  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:20.499833  800812 cri.go:89] found id: ""
	I1007 13:39:20.499870  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.499881  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:20.499889  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:20.499990  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:20.538996  800812 cri.go:89] found id: ""
	I1007 13:39:20.539031  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.539043  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:20.539051  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:20.539132  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:20.575341  800812 cri.go:89] found id: ""
	I1007 13:39:20.575379  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.575391  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:20.575400  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:20.575470  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:20.613527  800812 cri.go:89] found id: ""
	I1007 13:39:20.613562  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.613572  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:20.613582  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:20.613657  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:20.650651  800812 cri.go:89] found id: ""
	I1007 13:39:20.650686  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.650699  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:20.650709  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:20.650769  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:20.689122  800812 cri.go:89] found id: ""
	I1007 13:39:20.689151  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.689160  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:20.689166  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:20.689220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:20.725242  800812 cri.go:89] found id: ""
	I1007 13:39:20.725275  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.725284  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:20.725290  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:20.725348  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:20.759949  800812 cri.go:89] found id: ""
	I1007 13:39:20.759988  800812 logs.go:282] 0 containers: []
	W1007 13:39:20.760000  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:20.760014  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:20.760028  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:20.804886  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:20.804922  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:20.857652  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:20.857700  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:20.872182  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:20.872215  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:39:17.842234  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:17.210309  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:19.210449  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:21.709672  800212 pod_ready.go:103] pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:20.014047  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:22.014646  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:24.015649  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	W1007 13:39:20.945413  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:20.945439  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:20.945455  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:23.521232  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:23.537035  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:23.537116  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:23.580100  800812 cri.go:89] found id: ""
	I1007 13:39:23.580141  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.580154  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:23.580162  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:23.580220  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:23.622271  800812 cri.go:89] found id: ""
	I1007 13:39:23.622302  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.622313  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:23.622321  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:23.622390  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:23.658290  800812 cri.go:89] found id: ""
	I1007 13:39:23.658320  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.658335  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:23.658341  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:23.658398  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:23.696510  800812 cri.go:89] found id: ""
	I1007 13:39:23.696543  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.696555  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:23.696564  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:23.696624  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:23.732913  800812 cri.go:89] found id: ""
	I1007 13:39:23.732947  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.732967  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:23.732974  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:23.733027  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:23.774502  800812 cri.go:89] found id: ""
	I1007 13:39:23.774540  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.774550  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:23.774557  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:23.774710  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:23.821217  800812 cri.go:89] found id: ""
	I1007 13:39:23.821258  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.821269  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:23.821278  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:23.821350  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:23.864327  800812 cri.go:89] found id: ""
	I1007 13:39:23.864361  800812 logs.go:282] 0 containers: []
	W1007 13:39:23.864373  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:23.864386  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:23.864404  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:23.918454  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:23.918505  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:23.933324  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:23.933363  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:24.015858  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:24.015879  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:24.015892  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:24.096557  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:24.096609  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:23.926328  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:26.994313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:24.203346  800212 pod_ready.go:82] duration metric: took 4m0.00074454s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" ...
	E1007 13:39:24.203420  800212 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mf5r4" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:39:24.203447  800212 pod_ready.go:39] duration metric: took 4m15.010484686s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:39:24.203483  800212 kubeadm.go:597] duration metric: took 4m22.534978235s to restartPrimaryControlPlane
	W1007 13:39:24.203568  800212 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:24.203597  800212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:26.018248  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:28.513858  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:26.638856  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:26.654921  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:26.654989  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:26.693714  800812 cri.go:89] found id: ""
	I1007 13:39:26.693747  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.693756  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:26.693764  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:26.693819  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:26.732730  800812 cri.go:89] found id: ""
	I1007 13:39:26.732762  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.732771  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:26.732778  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:26.732837  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:26.774239  800812 cri.go:89] found id: ""
	I1007 13:39:26.774272  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.774281  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:26.774288  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:26.774352  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:26.812547  800812 cri.go:89] found id: ""
	I1007 13:39:26.812597  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.812609  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:26.812619  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:26.812676  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:26.849472  800812 cri.go:89] found id: ""
	I1007 13:39:26.849501  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.849509  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:26.849515  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:26.849572  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:26.885935  800812 cri.go:89] found id: ""
	I1007 13:39:26.885965  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.885974  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:26.885981  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:26.886052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:26.920629  800812 cri.go:89] found id: ""
	I1007 13:39:26.920659  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.920668  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:26.920674  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:26.920731  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:26.959016  800812 cri.go:89] found id: ""
	I1007 13:39:26.959052  800812 logs.go:282] 0 containers: []
	W1007 13:39:26.959065  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:26.959079  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:26.959095  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:27.012308  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:27.012351  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:27.027559  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:27.027602  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:27.111043  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:27.111070  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:27.111086  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:27.194428  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:27.194476  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
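Each "Gathering logs for ..." block above runs a fixed set of shell commands on the node: journalctl for kubelet and CRI-O, dmesg, kubectl describe nodes, and a crictl/docker process listing. A sketch that runs the same commands locally via bash -c (the remote ssh_runner transport used in the log is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same commands each "Gathering logs for ..." step runs, in the
	// order they appear in the log above.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s", s.name, out)
		if err != nil {
			// "describe nodes" fails exactly this way while the apiserver is down.
			fmt.Printf("(command failed: %v)\n", err)
		}
	}
}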
	I1007 13:39:29.738163  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:29.752869  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:29.752959  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:29.791071  800812 cri.go:89] found id: ""
	I1007 13:39:29.791102  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.791111  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:29.791128  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:29.791206  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:29.837148  800812 cri.go:89] found id: ""
	I1007 13:39:29.837194  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.837207  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:29.837217  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:29.837291  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:29.874334  800812 cri.go:89] found id: ""
	I1007 13:39:29.874371  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.874383  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:29.874391  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:29.874463  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:29.915799  800812 cri.go:89] found id: ""
	I1007 13:39:29.915835  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.915852  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:29.915861  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:29.915923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:29.954557  800812 cri.go:89] found id: ""
	I1007 13:39:29.954589  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.954598  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:29.954604  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:29.954661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:29.990873  800812 cri.go:89] found id: ""
	I1007 13:39:29.990912  800812 logs.go:282] 0 containers: []
	W1007 13:39:29.990925  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:29.990934  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:29.991019  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:30.031687  800812 cri.go:89] found id: ""
	I1007 13:39:30.031738  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.031751  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:30.031763  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:30.031872  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:30.071524  800812 cri.go:89] found id: ""
	I1007 13:39:30.071565  800812 logs.go:282] 0 containers: []
	W1007 13:39:30.071579  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:30.071594  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:30.071614  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:30.085558  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:30.085591  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:30.162897  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:30.162922  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:30.162935  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:30.244979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:30.245029  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:30.285065  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:30.285098  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:30.513894  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:33.013867  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
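The pod_ready lines above are a poll of the metrics-server pod's Ready condition on a 4m0s budget. The same wait expressed as a one-shot kubectl call, assuming the current kubeconfig context points at the cluster under test (the pod name is the one shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the pod named in the log reports Ready, or give up after
	// the same 4m0s window the test uses.
	cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
		"--for=condition=Ready",
		"pod/metrics-server-6867b74b74-zsm9l",
		"--timeout=4m0s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Printf("pod never became Ready: %v\n", err)
	}
}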
	I1007 13:39:32.838701  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:32.852755  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:32.852839  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:32.890012  800812 cri.go:89] found id: ""
	I1007 13:39:32.890067  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.890079  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:32.890088  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:32.890156  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:32.928467  800812 cri.go:89] found id: ""
	I1007 13:39:32.928499  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.928508  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:32.928517  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:32.928578  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:32.964908  800812 cri.go:89] found id: ""
	I1007 13:39:32.964944  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.964956  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:32.964965  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:32.965096  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:32.999714  800812 cri.go:89] found id: ""
	I1007 13:39:32.999747  800812 logs.go:282] 0 containers: []
	W1007 13:39:32.999773  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:32.999782  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:32.999849  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:33.037889  800812 cri.go:89] found id: ""
	I1007 13:39:33.037924  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.037934  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:33.037946  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:33.038015  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:33.076192  800812 cri.go:89] found id: ""
	I1007 13:39:33.076226  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.076234  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:33.076241  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:33.076311  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:33.112402  800812 cri.go:89] found id: ""
	I1007 13:39:33.112442  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.112455  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:33.112463  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:33.112527  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:33.151872  800812 cri.go:89] found id: ""
	I1007 13:39:33.151905  800812 logs.go:282] 0 containers: []
	W1007 13:39:33.151916  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:33.151927  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:33.151942  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:33.203529  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:33.203572  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:33.220050  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:33.220097  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:33.304000  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:33.304030  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:33.304046  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:33.383979  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:33.384038  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:33.074393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:36.146280  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
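The repeated "Error dialing TCP ... no route to host" lines come from the machine driver retrying the guest's SSH port until the VM's network is reachable. A hedged sketch of that kind of reachability loop with net.DialTimeout (the address is the one in the log; intervals and structure are assumptions, not libmachine's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr (host:port) until a TCP connection succeeds or the
// deadline passes. "no route to host" surfaces here as a dial error.
func waitForSSH(addr string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh port never became reachable: %v", err)
		}
		fmt.Printf("dial %s failed (%v), retrying\n", addr, err)
		time.Sleep(interval)
	}
}

func main() {
	// 192.168.61.101:22 is the address shown in the log above.
	if err := waitForSSH("192.168.61.101:22", 2*time.Minute, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}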
	I1007 13:39:35.015200  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:37.514925  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:35.929247  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:35.943624  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:35.943691  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:35.980943  800812 cri.go:89] found id: ""
	I1007 13:39:35.980973  800812 logs.go:282] 0 containers: []
	W1007 13:39:35.980988  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:35.980996  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:35.981068  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:36.021834  800812 cri.go:89] found id: ""
	I1007 13:39:36.021868  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.021876  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:36.021882  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:36.021939  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:36.056651  800812 cri.go:89] found id: ""
	I1007 13:39:36.056687  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.056698  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:36.056706  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:36.056781  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:36.095332  800812 cri.go:89] found id: ""
	I1007 13:39:36.095360  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.095369  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:36.095376  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:36.095433  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:36.141361  800812 cri.go:89] found id: ""
	I1007 13:39:36.141403  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.141416  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:36.141424  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:36.141485  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:36.179122  800812 cri.go:89] found id: ""
	I1007 13:39:36.179155  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.179165  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:36.179171  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:36.179226  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:36.212594  800812 cri.go:89] found id: ""
	I1007 13:39:36.212630  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.212642  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:36.212651  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:36.212723  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:36.253109  800812 cri.go:89] found id: ""
	I1007 13:39:36.253145  800812 logs.go:282] 0 containers: []
	W1007 13:39:36.253156  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:36.253169  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:36.253187  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:36.327696  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:36.327729  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:36.327747  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:36.404687  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:36.404735  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:36.444913  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:36.444955  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:36.497657  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:36.497711  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.013791  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:39.027274  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:39.027344  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:39.061214  800812 cri.go:89] found id: ""
	I1007 13:39:39.061246  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.061254  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:39.061262  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:39.061323  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:39.096245  800812 cri.go:89] found id: ""
	I1007 13:39:39.096277  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.096288  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:39.096296  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:39.096373  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:39.137152  800812 cri.go:89] found id: ""
	I1007 13:39:39.137192  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.137204  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:39.137212  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:39.137279  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:39.172052  800812 cri.go:89] found id: ""
	I1007 13:39:39.172085  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.172094  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:39.172100  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:39.172161  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:39.208796  800812 cri.go:89] found id: ""
	I1007 13:39:39.208832  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.208843  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:39.208852  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:39.208923  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:39.243568  800812 cri.go:89] found id: ""
	I1007 13:39:39.243598  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.243606  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:39.243613  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:39.243669  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:39.279168  800812 cri.go:89] found id: ""
	I1007 13:39:39.279201  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.279209  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:39.279216  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:39.279276  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:39.321347  800812 cri.go:89] found id: ""
	I1007 13:39:39.321373  800812 logs.go:282] 0 containers: []
	W1007 13:39:39.321382  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:39.321391  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:39.321405  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:39.373936  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:39.373986  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:39.388225  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:39.388258  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:39.462454  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:39.462482  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:39.462500  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:39.545876  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:39.545931  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:40.015715  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.514458  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:42.094078  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:42.107800  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:42.107869  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:42.143781  800812 cri.go:89] found id: ""
	I1007 13:39:42.143818  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.143829  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:42.143837  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:42.143913  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:42.186434  800812 cri.go:89] found id: ""
	I1007 13:39:42.186468  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.186479  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:42.186490  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:42.186562  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:42.221552  800812 cri.go:89] found id: ""
	I1007 13:39:42.221588  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.221599  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:42.221608  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:42.221682  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:42.255536  800812 cri.go:89] found id: ""
	I1007 13:39:42.255574  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.255586  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:42.255593  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:42.255662  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:42.290067  800812 cri.go:89] found id: ""
	I1007 13:39:42.290103  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.290114  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:42.290126  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:42.290197  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:42.326182  800812 cri.go:89] found id: ""
	I1007 13:39:42.326215  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.326225  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:42.326232  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:42.326287  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:42.360560  800812 cri.go:89] found id: ""
	I1007 13:39:42.360594  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.360606  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:42.360616  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:42.360683  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:42.396242  800812 cri.go:89] found id: ""
	I1007 13:39:42.396270  800812 logs.go:282] 0 containers: []
	W1007 13:39:42.396280  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:42.396291  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:42.396308  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.448101  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:42.448160  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:42.462617  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:42.462648  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:42.541262  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:42.541288  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:42.541306  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:42.617009  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:42.617052  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.157272  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:45.171699  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:45.171777  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:45.213274  800812 cri.go:89] found id: ""
	I1007 13:39:45.213311  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.213322  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:45.213331  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:45.213393  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:45.252304  800812 cri.go:89] found id: ""
	I1007 13:39:45.252339  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.252348  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:45.252355  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:45.252408  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:45.289702  800812 cri.go:89] found id: ""
	I1007 13:39:45.289739  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.289751  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:45.289758  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:45.289824  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:45.325776  800812 cri.go:89] found id: ""
	I1007 13:39:45.325815  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.325827  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:45.325836  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:45.325909  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:45.362636  800812 cri.go:89] found id: ""
	I1007 13:39:45.362672  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.362683  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:45.362692  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:45.362764  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:45.405058  800812 cri.go:89] found id: ""
	I1007 13:39:45.405090  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.405100  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:45.405108  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:45.405174  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:45.439752  800812 cri.go:89] found id: ""
	I1007 13:39:45.439783  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.439793  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:45.439802  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:45.439866  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:45.476336  800812 cri.go:89] found id: ""
	I1007 13:39:45.476369  800812 logs.go:282] 0 containers: []
	W1007 13:39:45.476377  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:45.476388  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:45.476402  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:45.489707  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:45.489739  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:45.564593  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:45.564626  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:45.564645  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:45.639136  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:45.639184  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:45.684415  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:45.684458  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:42.226242  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.298298  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:45.013741  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:47.014360  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:49.015110  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:48.245534  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:48.260357  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:39:48.260425  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:39:48.297561  800812 cri.go:89] found id: ""
	I1007 13:39:48.297591  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.297599  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:39:48.297606  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:39:48.297661  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:39:48.332654  800812 cri.go:89] found id: ""
	I1007 13:39:48.332694  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.332705  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:39:48.332715  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:39:48.332783  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:39:48.370775  800812 cri.go:89] found id: ""
	I1007 13:39:48.370818  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.370829  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:39:48.370837  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:39:48.370895  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:39:48.409282  800812 cri.go:89] found id: ""
	I1007 13:39:48.409318  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.409329  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:39:48.409338  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:39:48.409415  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:39:48.448602  800812 cri.go:89] found id: ""
	I1007 13:39:48.448634  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.448642  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:39:48.448648  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:39:48.448702  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:39:48.483527  800812 cri.go:89] found id: ""
	I1007 13:39:48.483556  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.483565  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:39:48.483572  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:39:48.483627  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:39:48.519600  800812 cri.go:89] found id: ""
	I1007 13:39:48.519636  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.519645  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:39:48.519657  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:39:48.519725  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:39:48.559446  800812 cri.go:89] found id: ""
	I1007 13:39:48.559481  800812 logs.go:282] 0 containers: []
	W1007 13:39:48.559493  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:39:48.559505  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:39:48.559523  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:39:48.575824  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:39:48.575879  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:39:48.660033  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:39:48.660067  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:39:48.660083  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:39:48.738011  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:39:48.738077  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:39:48.781399  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:39:48.781439  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:39:50.616036  800212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.41240969s)
	I1007 13:39:50.616124  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:50.638334  800212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:50.654214  800212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:50.672345  800212 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:50.672370  800212 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:50.672429  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:50.699073  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:50.699139  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:50.711774  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:50.737818  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:50.737885  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:50.749603  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.760893  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:50.760965  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:50.771572  800212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:50.781793  800212 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:50.781856  800212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
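The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes the file when the URL is absent (or the file is missing), so the following kubeadm init can rewrite it. A sketch of that cleanup loop running the same commands (illustrative only, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// Same check as the log: grep for the endpoint; a non-zero exit
		// (missing file or missing URL) means the config is stale or absent.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			// rm -f succeeds even if the file does not exist, mirroring the log.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}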
	I1007 13:39:50.793541  800212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:50.851411  800212 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:39:50.851486  800212 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:50.967773  800212 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:50.967938  800212 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:50.968105  800212 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:39:50.976935  800212 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:51.378305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:50.979096  800212 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:50.979227  800212 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:50.979291  800212 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:50.979375  800212 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:50.979467  800212 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:50.979560  800212 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:50.979634  800212 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:50.979717  800212 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:50.979789  800212 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:50.979857  800212 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:50.979925  800212 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:50.979959  800212 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:50.980011  800212 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:51.280206  800212 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:51.430988  800212 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:39:51.677074  800212 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:51.867985  800212 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:52.283613  800212 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:52.284108  800212 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:52.288874  800212 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.333296  800812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:39:51.346939  800812 kubeadm.go:597] duration metric: took 4m4.08487661s to restartPrimaryControlPlane
	W1007 13:39:51.347039  800812 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:39:51.347070  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:39:51.822215  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:39:51.841443  800812 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:39:51.854663  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:39:51.868065  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:39:51.868079  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:39:51.868140  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:39:51.879052  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:39:51.879133  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:39:51.889979  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:39:51.901929  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:39:51.902007  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:39:51.912958  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.923420  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:39:51.923492  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:39:51.934307  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:39:51.944066  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:39:51.944138  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:39:51.954170  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:39:52.028915  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:39:52.028973  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:39:52.180138  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:39:52.180312  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:39:52.180457  800812 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I1007 13:39:52.377920  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:39:52.379989  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:39:52.380160  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:39:52.380267  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:39:52.380407  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:39:52.380499  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:39:52.380607  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:39:52.380700  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:39:52.381700  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:39:52.382420  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:39:52.383189  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:39:52.384091  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:39:52.384191  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:39:52.384372  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:39:52.769185  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:39:52.870841  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:39:52.958399  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:39:53.168169  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:39:53.192475  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:53.193447  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:53.193519  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:53.355310  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:39:51.514892  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.515960  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:53.358443  800812 out.go:235]   - Booting up control plane ...
	I1007 13:39:53.358593  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:53.365515  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:53.366449  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:53.367325  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:39:53.369598  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:39:54.454391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:52.290945  800212 out.go:235]   - Booting up control plane ...
	I1007 13:39:52.291058  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:39:52.291164  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:39:52.291610  800212 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:39:52.312059  800212 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:39:52.318321  800212 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:39:52.318412  800212 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:39:52.456671  800212 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:39:52.456802  800212 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:39:52.958340  800212 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.579104ms
	I1007 13:39:52.958484  800212 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:39:57.959379  800212 kubeadm.go:310] [api-check] The API server is healthy after 5.001260012s
	I1007 13:39:57.980499  800212 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:39:57.999006  800212 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:39:58.043754  800212 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:39:58.044050  800212 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-653322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:39:58.062167  800212 kubeadm.go:310] [bootstrap-token] Using token: 72a6vd.dmbcvepur9l2dhmv
	I1007 13:39:58.064163  800212 out.go:235]   - Configuring RBAC rules ...
	I1007 13:39:58.064326  800212 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:39:58.079082  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:39:58.094414  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:39:58.099862  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:39:58.109846  800212 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:39:58.122572  800212 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:39:58.370342  800212 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:39:58.808645  800212 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:39:59.367759  800212 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:39:59.368708  800212 kubeadm.go:310] 
	I1007 13:39:59.368834  800212 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:39:59.368859  800212 kubeadm.go:310] 
	I1007 13:39:59.368976  800212 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:39:59.368991  800212 kubeadm.go:310] 
	I1007 13:39:59.369031  800212 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:39:59.369111  800212 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:39:59.369155  800212 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:39:59.369162  800212 kubeadm.go:310] 
	I1007 13:39:59.369217  800212 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:39:59.369245  800212 kubeadm.go:310] 
	I1007 13:39:59.369317  800212 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:39:59.369329  800212 kubeadm.go:310] 
	I1007 13:39:59.369390  800212 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:39:59.369487  800212 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:39:59.369588  800212 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:39:59.369600  800212 kubeadm.go:310] 
	I1007 13:39:59.369722  800212 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:39:59.369826  800212 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:39:59.369838  800212 kubeadm.go:310] 
	I1007 13:39:59.369960  800212 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370113  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:39:59.370151  800212 kubeadm.go:310] 	--control-plane 
	I1007 13:39:59.370160  800212 kubeadm.go:310] 
	I1007 13:39:59.370302  800212 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:39:59.370331  800212 kubeadm.go:310] 
	I1007 13:39:59.370458  800212 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 72a6vd.dmbcvepur9l2dhmv \
	I1007 13:39:59.370592  800212 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:39:59.371701  800212 kubeadm.go:310] W1007 13:39:50.802353    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372082  800212 kubeadm.go:310] W1007 13:39:50.803265    2575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:39:59.372217  800212 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:39:59.372252  800212 cni.go:84] Creating CNI manager for ""
	I1007 13:39:59.372266  800212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:39:59.374383  800212 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:39:56.015201  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:39:58.517383  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:00.534326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:39:59.376063  800212 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:39:59.389097  800212 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:39:59.409782  800212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:39:59.409864  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:39:59.409859  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-653322 minikube.k8s.io/updated_at=2024_10_07T13_39_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=embed-certs-653322 minikube.k8s.io/primary=true
	I1007 13:39:59.451756  800212 ops.go:34] apiserver oom_adj: -16
	I1007 13:39:59.647019  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.147361  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:00.647505  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.147866  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:01.647444  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.147271  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:02.647066  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.147382  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.647825  800212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:40:03.796730  800212 kubeadm.go:1113] duration metric: took 4.386947643s to wait for elevateKubeSystemPrivileges
	I1007 13:40:03.796776  800212 kubeadm.go:394] duration metric: took 5m2.178460784s to StartCluster
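The repeated `kubectl get sa default` runs above amount to a fixed-interval poll that returns once the default service account exists, which is what the elevateKubeSystemPrivileges duration measures. A rough sketch of such a retry loop; the kubeconfig path is the one shown in the log, while the interval and timeout are illustrative assumptions, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount retries `kubectl get sa default` until it succeeds
	// or the timeout expires. Interval and timeout are illustrative values only.
	func waitForDefaultServiceAccount(kubeconfig string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default service account exists
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not found after %s", timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}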
	I1007 13:40:03.796802  800212 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.796927  800212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:40:03.800809  800212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:40:03.801152  800212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:40:03.801235  800212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:40:03.801341  800212 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-653322"
	I1007 13:40:03.801366  800212 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-653322"
	W1007 13:40:03.801374  800212 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:40:03.801380  800212 addons.go:69] Setting default-storageclass=true in profile "embed-certs-653322"
	I1007 13:40:03.801397  800212 addons.go:69] Setting metrics-server=true in profile "embed-certs-653322"
	I1007 13:40:03.801418  800212 config.go:182] Loaded profile config "embed-certs-653322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:40:03.801428  800212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-653322"
	I1007 13:40:03.801442  800212 addons.go:234] Setting addon metrics-server=true in "embed-certs-653322"
	W1007 13:40:03.801452  800212 addons.go:243] addon metrics-server should already be in state true
	I1007 13:40:03.801485  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801411  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.801854  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801895  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801901  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.801908  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.801937  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.802059  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.803364  800212 out.go:177] * Verifying Kubernetes components...
	I1007 13:40:03.805464  800212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:40:03.820021  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1007 13:40:03.820297  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1007 13:40:03.820632  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.820812  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.821460  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821482  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.821598  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I1007 13:40:03.821627  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.821639  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.822131  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822377  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.822388  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.822769  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822823  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.822938  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.822990  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.823583  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.823609  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.824011  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.824209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.828672  800212 addons.go:234] Setting addon default-storageclass=true in "embed-certs-653322"
	W1007 13:40:03.828697  800212 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:40:03.828731  800212 host.go:66] Checking if "embed-certs-653322" exists ...
	I1007 13:40:03.829118  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.829169  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.839251  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1007 13:40:03.839800  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.840506  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.840533  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.840894  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.841130  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.842660  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I1007 13:40:03.843181  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.843235  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.843819  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.843841  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.844191  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.844469  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.845247  800212 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:40:03.846191  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.846688  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:40:03.846712  800212 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:40:03.846737  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.847801  800212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:40:01.015857  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.515782  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:03.849482  800212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:03.849504  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:40:03.849528  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.851923  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852765  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.852798  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.852987  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.853209  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.853367  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.853482  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.854532  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I1007 13:40:03.854540  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855100  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.855127  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.855438  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.855484  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.855836  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.856149  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.856179  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.856258  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.856436  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:03.856791  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.857523  800212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:40:03.857572  800212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:40:03.873780  800212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1007 13:40:03.874162  800212 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:40:03.874943  800212 main.go:141] libmachine: Using API Version  1
	I1007 13:40:03.874958  800212 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:40:03.875358  800212 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:40:03.875581  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetState
	I1007 13:40:03.877658  800212 main.go:141] libmachine: (embed-certs-653322) Calling .DriverName
	I1007 13:40:03.877924  800212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:03.877940  800212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:40:03.877962  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHHostname
	I1007 13:40:03.881764  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882241  800212 main.go:141] libmachine: (embed-certs-653322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c2:87", ip: ""} in network mk-embed-certs-653322: {Iface:virbr4 ExpiryTime:2024-10-07 14:34:47 +0000 UTC Type:0 Mac:52:54:00:5f:c2:87 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:embed-certs-653322 Clientid:01:52:54:00:5f:c2:87}
	I1007 13:40:03.882272  800212 main.go:141] libmachine: (embed-certs-653322) DBG | domain embed-certs-653322 has defined IP address 192.168.50.36 and MAC address 52:54:00:5f:c2:87 in network mk-embed-certs-653322
	I1007 13:40:03.882619  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHPort
	I1007 13:40:03.882839  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHKeyPath
	I1007 13:40:03.882999  800212 main.go:141] libmachine: (embed-certs-653322) Calling .GetSSHUsername
	I1007 13:40:03.883146  800212 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/embed-certs-653322/id_rsa Username:docker}
	I1007 13:40:04.059493  800212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:40:04.092602  800212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135614  800212 node_ready.go:49] node "embed-certs-653322" has status "Ready":"True"
	I1007 13:40:04.135639  800212 node_ready.go:38] duration metric: took 42.999262ms for node "embed-certs-653322" to be "Ready" ...
	I1007 13:40:04.135649  800212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:04.168633  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:04.177323  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:40:04.206431  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:40:04.358331  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:40:04.358360  800212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:40:04.453932  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:40:04.453978  800212 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:40:04.543045  800212 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:04.543079  800212 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:40:04.628016  800212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:40:05.373199  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166722968s)
	I1007 13:40:05.373269  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373286  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373188  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.195822413s)
	I1007 13:40:05.373374  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373395  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373726  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373746  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373756  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373764  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.373772  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.373786  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.373798  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.373810  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.373819  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.374033  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374019  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374056  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.374077  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:05.374104  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.374123  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:05.449400  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:05.449435  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:05.449768  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:05.449785  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034194  800212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.406118465s)
	I1007 13:40:06.034270  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034292  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034583  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034603  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034613  800212 main.go:141] libmachine: Making call to close driver server
	I1007 13:40:06.034620  800212 main.go:141] libmachine: (embed-certs-653322) Calling .Close
	I1007 13:40:06.034852  800212 main.go:141] libmachine: (embed-certs-653322) DBG | Closing plugin on server side
	I1007 13:40:06.034920  800212 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:40:06.034951  800212 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:40:06.034967  800212 addons.go:475] Verifying addon metrics-server=true in "embed-certs-653322"
	I1007 13:40:06.036901  800212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:40:03.602357  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:06.038108  800212 addons.go:510] duration metric: took 2.236891318s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1007 13:40:06.178973  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:06.015270  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.514554  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:08.675453  800212 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:10.182593  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.182620  800212 pod_ready.go:82] duration metric: took 6.013956349s for pod "coredns-7c65d6cfc9-hrbbb" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.182630  800212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189183  800212 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.189216  800212 pod_ready.go:82] duration metric: took 6.578623ms for pod "coredns-7c65d6cfc9-l6vfj" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.189229  800212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195272  800212 pod_ready.go:93] pod "etcd-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.195298  800212 pod_ready.go:82] duration metric: took 6.06024ms for pod "etcd-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.195308  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203341  800212 pod_ready.go:93] pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.203365  800212 pod_ready.go:82] duration metric: took 8.050464ms for pod "kube-apiserver-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.203375  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209333  800212 pod_ready.go:93] pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.209364  800212 pod_ready.go:82] duration metric: took 5.980877ms for pod "kube-controller-manager-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.209377  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573541  800212 pod_ready.go:93] pod "kube-proxy-z9r92" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.573574  800212 pod_ready.go:82] duration metric: took 364.188673ms for pod "kube-proxy-z9r92" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.573586  800212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973294  800212 pod_ready.go:93] pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace has status "Ready":"True"
	I1007 13:40:10.973325  800212 pod_ready.go:82] duration metric: took 399.732244ms for pod "kube-scheduler-embed-certs-653322" in "kube-system" namespace to be "Ready" ...
	I1007 13:40:10.973334  800212 pod_ready.go:39] duration metric: took 6.837673484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:10.973354  800212 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:40:10.973424  800212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:40:10.989629  800212 api_server.go:72] duration metric: took 7.188432004s to wait for apiserver process to appear ...
	I1007 13:40:10.989661  800212 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:40:10.989690  800212 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I1007 13:40:10.994679  800212 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I1007 13:40:10.995855  800212 api_server.go:141] control plane version: v1.31.1
	I1007 13:40:10.995882  800212 api_server.go:131] duration metric: took 6.212413ms to wait for apiserver health ...
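The healthz wait above is an HTTP poll against the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal, self-contained sketch of that kind of check; the URL comes from the log, while the polling interval, timeout, and the skipped TLS verification are assumptions made only to keep the example short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz URL until it returns HTTP 200 or the timeout expires.
	// TLS verification is skipped only to keep the sketch self-contained; a real check should trust the cluster CA.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver not healthy after %s", timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Address taken from the log above; interval and timeout are illustrative.
		if err := waitForHealthz("https://192.168.50.36:8443/healthz", time.Second, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}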
	I1007 13:40:10.995894  800212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:40:11.176174  800212 system_pods.go:59] 9 kube-system pods found
	I1007 13:40:11.176207  800212 system_pods.go:61] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.176213  800212 system_pods.go:61] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.176217  800212 system_pods.go:61] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.176221  800212 system_pods.go:61] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.176225  800212 system_pods.go:61] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.176228  800212 system_pods.go:61] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.176231  800212 system_pods.go:61] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.176236  800212 system_pods.go:61] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.176240  800212 system_pods.go:61] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.176251  800212 system_pods.go:74] duration metric: took 180.350309ms to wait for pod list to return data ...
	I1007 13:40:11.176258  800212 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:40:11.374362  800212 default_sa.go:45] found service account: "default"
	I1007 13:40:11.374397  800212 default_sa.go:55] duration metric: took 198.130993ms for default service account to be created ...
	I1007 13:40:11.374410  800212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:40:11.577087  800212 system_pods.go:86] 9 kube-system pods found
	I1007 13:40:11.577124  800212 system_pods.go:89] "coredns-7c65d6cfc9-hrbbb" [c5a49453-f8c8-44d1-bbca-2b7472bf504b] Running
	I1007 13:40:11.577130  800212 system_pods.go:89] "coredns-7c65d6cfc9-l6vfj" [fe2f90d1-9c6f-4ada-996d-fc63bb7baffe] Running
	I1007 13:40:11.577134  800212 system_pods.go:89] "etcd-embed-certs-653322" [93d90873-0499-40a5-9800-1eaa77ff3f26] Running
	I1007 13:40:11.577138  800212 system_pods.go:89] "kube-apiserver-embed-certs-653322" [08befe94-631a-4082-9e93-f17d70d93522] Running
	I1007 13:40:11.577141  800212 system_pods.go:89] "kube-controller-manager-embed-certs-653322" [b989c141-47a2-416c-bead-a5d557d6b216] Running
	I1007 13:40:11.577145  800212 system_pods.go:89] "kube-proxy-z9r92" [762b87c9-62ad-4bca-8135-77649d0a453a] Running
	I1007 13:40:11.577149  800212 system_pods.go:89] "kube-scheduler-embed-certs-653322" [1a5114db-4055-47ee-9ac5-9575e18d46c9] Running
	I1007 13:40:11.577157  800212 system_pods.go:89] "metrics-server-6867b74b74-xwpbg" [0f8c5895-ed84-4e2f-be7a-ed5858f47ce6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:40:11.577161  800212 system_pods.go:89] "storage-provisioner" [e0396d2d-9740-4e17-868b-041d948a6eff] Running
	I1007 13:40:11.577171  800212 system_pods.go:126] duration metric: took 202.754732ms to wait for k8s-apps to be running ...
	I1007 13:40:11.577179  800212 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:40:11.577228  800212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:40:11.595122  800212 system_svc.go:56] duration metric: took 17.926197ms WaitForService to wait for kubelet
	I1007 13:40:11.595159  800212 kubeadm.go:582] duration metric: took 7.793966621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:40:11.595189  800212 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:40:11.774788  800212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:40:11.774819  800212 node_conditions.go:123] node cpu capacity is 2
	I1007 13:40:11.774833  800212 node_conditions.go:105] duration metric: took 179.638486ms to run NodePressure ...
	I1007 13:40:11.774845  800212 start.go:241] waiting for startup goroutines ...
	I1007 13:40:11.774852  800212 start.go:246] waiting for cluster config update ...
	I1007 13:40:11.774862  800212 start.go:255] writing updated cluster config ...
	I1007 13:40:11.775199  800212 ssh_runner.go:195] Run: rm -f paused
	I1007 13:40:11.829109  800212 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:40:11.831389  800212 out.go:177] * Done! kubectl is now configured to use "embed-certs-653322" cluster and "default" namespace by default
	I1007 13:40:09.682305  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:11.014595  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:13.514109  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:12.754391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:16.015105  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.513935  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:18.834414  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.906376  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:21.015129  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:23.518245  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:26.014981  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:28.513904  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:27.986365  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.058375  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:31.015269  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.514729  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:33.370670  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:40:33.371065  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:33.371255  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:36.013424  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.014881  800087 pod_ready.go:103] pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace has status "Ready":"False"
	I1007 13:40:38.507584  800087 pod_ready.go:82] duration metric: took 4m0.000325195s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" ...
	E1007 13:40:38.507633  800087 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zsm9l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:40:38.507657  800087 pod_ready.go:39] duration metric: took 4m14.542185527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:40:38.507694  800087 kubeadm.go:597] duration metric: took 4m21.215120888s to restartPrimaryControlPlane
	W1007 13:40:38.507784  800087 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:40:38.507824  800087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:40:38.371494  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:38.371681  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:37.138368  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:40.210391  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:46.290312  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:48.371961  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:40:48.372225  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:40:49.362313  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:55.442333  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:40:58.514279  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:04.757708  800087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.249856079s)
	I1007 13:41:04.757796  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:04.787393  800087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:41:04.805311  800087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:04.819815  800087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:04.819839  800087 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:04.819889  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:04.832607  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:04.832673  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:04.847624  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:04.859808  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:04.859890  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:04.886041  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.896677  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:04.896746  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:04.906688  800087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:04.915884  800087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:04.915965  800087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:41:04.925767  800087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:04.981704  800087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:41:04.981799  800087 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:05.104530  800087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:05.104648  800087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:05.104750  800087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:41:05.114782  800087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:05.116948  800087 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:05.117074  800087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:05.117168  800087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:05.117275  800087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:05.117358  800087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:05.117447  800087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:05.117522  800087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:05.117620  800087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:05.117733  800087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:05.117851  800087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:05.117961  800087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:05.118055  800087 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:05.118147  800087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:05.216990  800087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:05.548814  800087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:41:05.921322  800087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:06.206950  800087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:06.412087  800087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:06.412698  800087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:06.415768  800087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:04.598286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:06.418055  800087 out.go:235]   - Booting up control plane ...
	I1007 13:41:06.418195  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:06.419324  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:06.420095  800087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:06.437974  800087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:06.447497  800087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:06.447580  800087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:06.582080  800087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:41:06.582223  800087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:41:07.583021  800087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001204833s
	I1007 13:41:07.583165  800087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:41:08.372715  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:08.372913  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:07.666427  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:13.085728  800087 kubeadm.go:310] [api-check] The API server is healthy after 5.502732546s
	I1007 13:41:13.105047  800087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:41:13.122083  800087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:41:13.157464  800087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:41:13.157751  800087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-016701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:41:13.176062  800087 kubeadm.go:310] [bootstrap-token] Using token: ott6bx.mfcul37ilsfpftrr
	I1007 13:41:13.177574  800087 out.go:235]   - Configuring RBAC rules ...
	I1007 13:41:13.177739  800087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:41:13.184629  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:41:13.200989  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:41:13.206521  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:41:13.212338  800087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:41:13.217063  800087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:41:13.493012  800087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:41:13.926154  800087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:41:14.500818  800087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:41:14.500844  800087 kubeadm.go:310] 
	I1007 13:41:14.500894  800087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:41:14.500899  800087 kubeadm.go:310] 
	I1007 13:41:14.500988  800087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:41:14.501001  800087 kubeadm.go:310] 
	I1007 13:41:14.501041  800087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:41:14.501095  800087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:41:14.501196  800087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:41:14.501223  800087 kubeadm.go:310] 
	I1007 13:41:14.501307  800087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:41:14.501316  800087 kubeadm.go:310] 
	I1007 13:41:14.501379  800087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:41:14.501448  800087 kubeadm.go:310] 
	I1007 13:41:14.501533  800087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:41:14.501629  800087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:41:14.501733  800087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:41:14.501750  800087 kubeadm.go:310] 
	I1007 13:41:14.501854  800087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:41:14.501964  800087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:41:14.501973  800087 kubeadm.go:310] 
	I1007 13:41:14.502109  800087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502269  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:41:14.502311  800087 kubeadm.go:310] 	--control-plane 
	I1007 13:41:14.502322  800087 kubeadm.go:310] 
	I1007 13:41:14.502443  800087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:41:14.502453  800087 kubeadm.go:310] 
	I1007 13:41:14.502600  800087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ott6bx.mfcul37ilsfpftrr \
	I1007 13:41:14.502755  800087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:41:14.503943  800087 kubeadm.go:310] W1007 13:41:04.948448    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504331  800087 kubeadm.go:310] W1007 13:41:04.949311    2978 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:41:14.504448  800087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
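	For reference (standard kubeadm behaviour rather than anything specific to this run): if the bootstrap token printed above has expired, an equivalent join command can be regenerated on the control-plane node with:

		sudo kubeadm token create --print-join-command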
	I1007 13:41:14.504466  800087 cni.go:84] Creating CNI manager for ""
	I1007 13:41:14.504474  800087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:41:14.506680  800087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:41:14.508369  800087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:41:14.520414  800087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
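	A hedged sketch of what a bridge + host-local CNI conflist of this kind typically contains; the field values are illustrative and the exact 496-byte file minikube writes may differ:

		cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isGateway": true,
		      "ipMasq": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF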
	I1007 13:41:14.544669  800087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:41:14.544833  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:14.544851  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-016701 minikube.k8s.io/updated_at=2024_10_07T13_41_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=no-preload-016701 minikube.k8s.io/primary=true
	I1007 13:41:14.772594  800087 ops.go:34] apiserver oom_adj: -16
	I1007 13:41:14.772619  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:13.746372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:16.822393  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:15.273211  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:15.772786  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.273580  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:16.773395  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.272868  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:17.773484  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.273717  800087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:41:18.405010  800087 kubeadm.go:1113] duration metric: took 3.86025273s to wait for elevateKubeSystemPrivileges
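	The repeated 'kubectl get sa default' calls above are a poll for the default service account; roughly the following shell loop, retried about every 500ms until kubectl exits 0:

		until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done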
	I1007 13:41:18.405055  800087 kubeadm.go:394] duration metric: took 5m1.164485599s to StartCluster
	I1007 13:41:18.405081  800087 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.405182  800087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:41:18.406935  800087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:41:18.407244  800087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.197 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:41:18.407398  800087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:41:18.407513  800087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-016701"
	I1007 13:41:18.407539  800087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-016701"
	W1007 13:41:18.407549  800087 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:41:18.407548  800087 addons.go:69] Setting default-storageclass=true in profile "no-preload-016701"
	I1007 13:41:18.407572  800087 addons.go:69] Setting metrics-server=true in profile "no-preload-016701"
	I1007 13:41:18.407615  800087 addons.go:234] Setting addon metrics-server=true in "no-preload-016701"
	W1007 13:41:18.407721  800087 addons.go:243] addon metrics-server should already be in state true
	I1007 13:41:18.407850  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407591  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.407545  800087 config.go:182] Loaded profile config "no-preload-016701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:41:18.407594  800087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-016701"
	I1007 13:41:18.408374  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408387  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408417  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.408424  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408447  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.408542  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.409406  800087 out.go:177] * Verifying Kubernetes components...
	I1007 13:41:18.411018  800087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:41:18.425614  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I1007 13:41:18.426275  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.426764  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I1007 13:41:18.426926  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.426956  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427308  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.427410  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.427840  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.427862  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.427976  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.428024  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.428257  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.428470  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.428478  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I1007 13:41:18.428980  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.429578  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.429605  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.429927  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.430564  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.430602  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.431895  800087 addons.go:234] Setting addon default-storageclass=true in "no-preload-016701"
	W1007 13:41:18.431918  800087 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:41:18.431952  800087 host.go:66] Checking if "no-preload-016701" exists ...
	I1007 13:41:18.432279  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.432319  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.445003  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1007 13:41:18.445514  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.445968  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1007 13:41:18.446101  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.446125  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.446534  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.446580  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.446821  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.447159  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.447187  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.447559  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.447754  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.449595  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.450543  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.452177  800087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:41:18.452788  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I1007 13:41:18.453311  800087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:41:18.453332  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.454421  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.454443  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.454767  800087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.454791  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:41:18.454813  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.454902  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.455260  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:41:18.455277  800087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:41:18.455293  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.455514  800087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:41:18.455574  800087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:41:18.458904  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459133  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459321  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459529  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459681  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.459699  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.459704  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.459849  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.459962  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.459994  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.460161  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.460349  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.460480  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.495484  800087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1007 13:41:18.496027  800087 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:41:18.496790  800087 main.go:141] libmachine: Using API Version  1
	I1007 13:41:18.496828  800087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:41:18.497324  800087 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:41:18.497537  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetState
	I1007 13:41:18.499178  800087 main.go:141] libmachine: (no-preload-016701) Calling .DriverName
	I1007 13:41:18.499425  800087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.499440  800087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:41:18.499457  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHHostname
	I1007 13:41:18.502808  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503337  800087 main.go:141] libmachine: (no-preload-016701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1e:55", ip: ""} in network mk-no-preload-016701: {Iface:virbr3 ExpiryTime:2024-10-07 14:35:49 +0000 UTC Type:0 Mac:52:54:00:d2:1e:55 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:no-preload-016701 Clientid:01:52:54:00:d2:1e:55}
	I1007 13:41:18.503363  800087 main.go:141] libmachine: (no-preload-016701) DBG | domain no-preload-016701 has defined IP address 192.168.39.197 and MAC address 52:54:00:d2:1e:55 in network mk-no-preload-016701
	I1007 13:41:18.503573  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHPort
	I1007 13:41:18.503796  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHKeyPath
	I1007 13:41:18.503972  800087 main.go:141] libmachine: (no-preload-016701) Calling .GetSSHUsername
	I1007 13:41:18.504135  800087 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/no-preload-016701/id_rsa Username:docker}
	I1007 13:41:18.607501  800087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:41:18.631538  800087 node_ready.go:35] waiting up to 6m0s for node "no-preload-016701" to be "Ready" ...
	I1007 13:41:18.645041  800087 node_ready.go:49] node "no-preload-016701" has status "Ready":"True"
	I1007 13:41:18.645065  800087 node_ready.go:38] duration metric: took 13.492405ms for node "no-preload-016701" to be "Ready" ...
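	A hand-run equivalent of the node readiness check above (standard kubectl, assuming the same kubeconfig):

		kubectl get node no-preload-016701 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'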
	I1007 13:41:18.645076  800087 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:18.651831  800087 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:18.689502  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:41:18.714363  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:41:18.714386  800087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:41:18.738095  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:41:18.794344  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:41:18.794384  800087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:41:18.906848  800087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:18.906886  800087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:41:18.991553  800087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:41:19.434333  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434360  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434687  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.434701  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434710  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434716  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.434932  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.434987  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435004  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.435015  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.434993  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435269  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.435274  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:19.435282  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.435290  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.435297  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.436889  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.436909  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.456678  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:19.456714  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:19.457112  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:19.457133  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:19.457164  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.382548  800087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.390945966s)
	I1007 13:41:20.382614  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.382628  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.382952  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383052  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383068  800087 main.go:141] libmachine: Making call to close driver server
	I1007 13:41:20.383077  800087 main.go:141] libmachine: (no-preload-016701) Calling .Close
	I1007 13:41:20.383010  800087 main.go:141] libmachine: (no-preload-016701) DBG | Closing plugin on server side
	I1007 13:41:20.383354  800087 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:41:20.383370  800087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:41:20.383384  800087 addons.go:475] Verifying addon metrics-server=true in "no-preload-016701"
	I1007 13:41:20.385366  800087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 13:41:20.386603  800087 addons.go:510] duration metric: took 1.979211294s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
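	A hedged sketch of how the enabled addons could be checked by hand at this point; the deployment name matches the pod name prefix seen later in this log, while the label selector is the one minikube's metrics-server manifests normally use and is an assumption here:

		kubectl get storageclass
		kubectl -n kube-system get deploy metrics-server
		kubectl -n kube-system get pods -l k8s-app=metrics-server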
	I1007 13:41:20.665725  800087 pod_ready.go:103] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"False"
	I1007 13:41:22.158063  800087 pod_ready.go:93] pod "etcd-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:22.158090  800087 pod_ready.go:82] duration metric: took 3.506228901s for pod "etcd-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:22.158100  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165304  800087 pod_ready.go:93] pod "kube-apiserver-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.165330  800087 pod_ready.go:82] duration metric: took 2.007223213s for pod "kube-apiserver-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.165340  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172907  800087 pod_ready.go:93] pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.172930  800087 pod_ready.go:82] duration metric: took 7.583143ms for pod "kube-controller-manager-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.172939  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180216  800087 pod_ready.go:93] pod "kube-proxy-bjqg2" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.180243  800087 pod_ready.go:82] duration metric: took 7.297732ms for pod "kube-proxy-bjqg2" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.180255  800087 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185080  800087 pod_ready.go:93] pod "kube-scheduler-no-preload-016701" in "kube-system" namespace has status "Ready":"True"
	I1007 13:41:24.185108  800087 pod_ready.go:82] duration metric: took 4.84549ms for pod "kube-scheduler-no-preload-016701" in "kube-system" namespace to be "Ready" ...
	I1007 13:41:24.185119  800087 pod_ready.go:39] duration metric: took 5.540032302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:41:24.185141  800087 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:41:24.185197  800087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:41:24.201360  800087 api_server.go:72] duration metric: took 5.794073168s to wait for apiserver process to appear ...
	I1007 13:41:24.201464  800087 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:41:24.201496  800087 api_server.go:253] Checking apiserver healthz at https://192.168.39.197:8443/healthz ...
	I1007 13:41:24.207141  800087 api_server.go:279] https://192.168.39.197:8443/healthz returned 200:
	ok
	I1007 13:41:24.208456  800087 api_server.go:141] control plane version: v1.31.1
	I1007 13:41:24.208481  800087 api_server.go:131] duration metric: took 7.007742ms to wait for apiserver health ...
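	The same health probe, done by hand against the endpoint logged above (-k skips CA verification; alternatively pass --cacert with the cluster CA):

		curl -k https://192.168.39.197:8443/healthz; echo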
	I1007 13:41:24.208491  800087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:41:24.213660  800087 system_pods.go:59] 9 kube-system pods found
	I1007 13:41:24.213693  800087 system_pods.go:61] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213701  800087 system_pods.go:61] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.213711  800087 system_pods.go:61] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.213716  800087 system_pods.go:61] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.213719  800087 system_pods.go:61] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.213722  800087 system_pods.go:61] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.213725  800087 system_pods.go:61] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.213730  800087 system_pods.go:61] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.213734  800087 system_pods.go:61] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.213742  800087 system_pods.go:74] duration metric: took 5.244677ms to wait for pod list to return data ...
	I1007 13:41:24.213749  800087 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:41:24.216891  800087 default_sa.go:45] found service account: "default"
	I1007 13:41:24.216923  800087 default_sa.go:55] duration metric: took 3.165762ms for default service account to be created ...
	I1007 13:41:24.216936  800087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:41:24.366926  800087 system_pods.go:86] 9 kube-system pods found
	I1007 13:41:24.366962  800087 system_pods.go:89] "coredns-7c65d6cfc9-pdnlq" [438ffc56-51bd-4100-9d6d-50b06b6bc159] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366970  800087 system_pods.go:89] "coredns-7c65d6cfc9-qq4hc" [5d780dda-6153-47aa-95b0-88f5674dabf9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:41:24.366977  800087 system_pods.go:89] "etcd-no-preload-016701" [0f5411ff-86e2-476d-9e83-2bbc7121e927] Running
	I1007 13:41:24.366982  800087 system_pods.go:89] "kube-apiserver-no-preload-016701" [1204535d-a0ab-463f-9de5-d286b14931e8] Running
	I1007 13:41:24.366986  800087 system_pods.go:89] "kube-controller-manager-no-preload-016701" [6ac9a9cd-dce8-4bba-8e13-dfe9e2513ad6] Running
	I1007 13:41:24.366990  800087 system_pods.go:89] "kube-proxy-bjqg2" [ba601e18-7fb7-4ad6-84ad-7480846bf394] Running
	I1007 13:41:24.366993  800087 system_pods.go:89] "kube-scheduler-no-preload-016701" [dfb9d54b-10a8-4c91-b987-aecb1a972dd6] Running
	I1007 13:41:24.366998  800087 system_pods.go:89] "metrics-server-6867b74b74-s7qkh" [421db538-caa5-46ae-85bb-7c70aea43877] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:41:24.367001  800087 system_pods.go:89] "storage-provisioner" [18d1068f-0542-4c9d-a6d0-75fcca08cf58] Running
	I1007 13:41:24.367011  800087 system_pods.go:126] duration metric: took 150.068129ms to wait for k8s-apps to be running ...
	I1007 13:41:24.367018  800087 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:41:24.367064  800087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:24.383197  800087 system_svc.go:56] duration metric: took 16.165166ms WaitForService to wait for kubelet
	I1007 13:41:24.383232  800087 kubeadm.go:582] duration metric: took 5.975954284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:41:24.383256  800087 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:41:24.563433  800087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:41:24.563469  800087 node_conditions.go:123] node cpu capacity is 2
	I1007 13:41:24.563486  800087 node_conditions.go:105] duration metric: took 180.224622ms to run NodePressure ...
	I1007 13:41:24.563503  800087 start.go:241] waiting for startup goroutines ...
	I1007 13:41:24.563514  800087 start.go:246] waiting for cluster config update ...
	I1007 13:41:24.563529  800087 start.go:255] writing updated cluster config ...
	I1007 13:41:24.563898  800087 ssh_runner.go:195] Run: rm -f paused
	I1007 13:41:24.619289  800087 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:41:24.621527  800087 out.go:177] * Done! kubectl is now configured to use "no-preload-016701" cluster and "default" namespace by default
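	At this point the profile's context is the kubectl default, so the cluster can be inspected directly, e.g.:

		kubectl get nodes -o wide
		kubectl -n kube-system get pods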
	I1007 13:41:22.898326  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:25.970388  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:32.050353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:35.122329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:41.202320  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:44.274335  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:48.374723  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:41:48.375006  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:41:48.375034  800812 kubeadm.go:310] 
	I1007 13:41:48.375075  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:41:48.375132  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:41:48.375140  800812 kubeadm.go:310] 
	I1007 13:41:48.375183  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:41:48.375231  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:41:48.375369  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:41:48.375392  800812 kubeadm.go:310] 
	I1007 13:41:48.375514  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:41:48.375568  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:41:48.375617  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:41:48.375626  800812 kubeadm.go:310] 
	I1007 13:41:48.375747  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:41:48.375877  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:41:48.375895  800812 kubeadm.go:310] 
	I1007 13:41:48.376053  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:41:48.376140  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:41:48.376211  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:41:48.376288  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:41:48.376302  800812 kubeadm.go:310] 
	I1007 13:41:48.376705  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:41:48.376830  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:41:48.376948  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
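	The troubleshooting steps kubeadm suggests above, consolidated into one hand-runnable sequence for a cri-o node (all commands are the ones named in the log):

		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 50
		curl -sS http://localhost:10248/healthz; echo
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# then, for a failing container ID found above:
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID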
	W1007 13:41:48.377115  800812 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:41:48.377169  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:41:48.848117  800812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:41:48.863751  800812 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:41:48.874610  800812 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:41:48.874642  800812 kubeadm.go:157] found existing configuration files:
	
	I1007 13:41:48.874715  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:41:48.886201  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:41:48.886279  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:41:48.897494  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:41:48.908398  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:41:48.908481  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:41:48.921409  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.931814  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:41:48.931882  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:41:48.943484  800812 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:41:48.955060  800812 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:41:48.955245  800812 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
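	The stale-config check performed above, written out as the loop it roughly amounts to: each kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before the retry:

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done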
	I1007 13:41:48.966391  800812 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:41:49.042441  800812 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:41:49.042521  800812 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:41:49.203488  800812 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:41:49.203603  800812 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:41:49.203715  800812 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:41:49.410381  800812 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:41:49.412411  800812 out.go:235]   - Generating certificates and keys ...
	I1007 13:41:49.412520  800812 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:41:49.412591  800812 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:41:49.412694  800812 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:41:49.412816  800812 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:41:49.412940  800812 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:41:49.412999  800812 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:41:49.413053  800812 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:41:49.413105  800812 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:41:49.413196  800812 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:41:49.413283  800812 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:41:49.413319  800812 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:41:49.413396  800812 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:41:49.634922  800812 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:41:49.724221  800812 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:41:49.804768  800812 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:41:49.980061  800812 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:41:50.000515  800812 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:41:50.000858  800812 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:41:50.001053  800812 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:41:50.163951  800812 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:41:50.166163  800812 out.go:235]   - Booting up control plane ...
	I1007 13:41:50.166331  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:41:50.180837  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:41:50.181963  800812 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:41:50.184140  800812 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:41:50.190548  800812 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:41:50.354360  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:53.426359  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:41:59.510321  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:02.578322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:08.658292  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:11.730352  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:17.810322  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:20.882397  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:26.962343  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:30.192477  800812 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:42:30.192790  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:30.193025  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:30.034345  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:35.193544  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:35.193820  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:36.114353  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:39.186453  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:45.194245  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:42:45.194449  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:42:45.266293  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:48.338329  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:54.418332  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:42:57.490294  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:05.194833  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:05.195103  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:03.570372  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:06.642286  802960 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.101:22: connect: no route to host
	I1007 13:43:09.643253  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:09.643290  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643598  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:09.643627  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:09.643837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:09.645347  802960 machine.go:96] duration metric: took 4m37.397836997s to provisionDockerMachine
	I1007 13:43:09.645389  802960 fix.go:56] duration metric: took 4m37.421085967s for fixHost
	I1007 13:43:09.645394  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 4m37.421104002s
	W1007 13:43:09.645409  802960 start.go:714] error starting host: provision: host is not running
	W1007 13:43:09.645530  802960 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 13:43:09.645542  802960 start.go:729] Will try again in 5 seconds ...
	I1007 13:43:14.646206  802960 start.go:360] acquireMachinesLock for default-k8s-diff-port-489319: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:43:14.646330  802960 start.go:364] duration metric: took 74.211µs to acquireMachinesLock for "default-k8s-diff-port-489319"
	I1007 13:43:14.646374  802960 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:43:14.646382  802960 fix.go:54] fixHost starting: 
	I1007 13:43:14.646717  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:43:14.646746  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:43:14.662426  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I1007 13:43:14.663016  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:43:14.663790  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:43:14.663822  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:43:14.664176  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:43:14.664429  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:14.664605  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:43:14.666440  802960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-489319: state=Stopped err=<nil>
	I1007 13:43:14.666467  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	W1007 13:43:14.666648  802960 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:43:14.668507  802960 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-489319" ...
	I1007 13:43:14.669973  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Start
	I1007 13:43:14.670294  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring networks are active...
	I1007 13:43:14.671299  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network default is active
	I1007 13:43:14.671623  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Ensuring network mk-default-k8s-diff-port-489319 is active
	I1007 13:43:14.672332  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Getting domain xml...
	I1007 13:43:14.673106  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Creating domain...
	I1007 13:43:15.035227  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting to get IP...
	I1007 13:43:15.036226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036673  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.036768  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.036657  804186 retry.go:31] will retry after 204.852009ms: waiting for machine to come up
	I1007 13:43:15.243827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244610  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.244699  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.244581  804186 retry.go:31] will retry after 334.887784ms: waiting for machine to come up
	I1007 13:43:15.581226  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581717  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.581747  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.581665  804186 retry.go:31] will retry after 354.992125ms: waiting for machine to come up
	I1007 13:43:15.938078  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938577  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:15.938614  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:15.938518  804186 retry.go:31] will retry after 592.784389ms: waiting for machine to come up
	I1007 13:43:16.533531  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534103  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:16.534128  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:16.534054  804186 retry.go:31] will retry after 756.034822ms: waiting for machine to come up
	I1007 13:43:17.291995  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292785  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:17.292807  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:17.292736  804186 retry.go:31] will retry after 896.816081ms: waiting for machine to come up
	I1007 13:43:18.191016  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191527  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:18.191560  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:18.191466  804186 retry.go:31] will retry after 1.08609499s: waiting for machine to come up
	I1007 13:43:19.280109  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280537  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:19.280576  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:19.280520  804186 retry.go:31] will retry after 1.392221474s: waiting for machine to come up
	I1007 13:43:20.674622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675071  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:20.675115  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:20.675031  804186 retry.go:31] will retry after 1.78021676s: waiting for machine to come up
	I1007 13:43:22.457647  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:22.458248  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:22.458160  804186 retry.go:31] will retry after 2.117086662s: waiting for machine to come up
	I1007 13:43:24.576838  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577415  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:24.577445  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:24.577364  804186 retry.go:31] will retry after 2.850833043s: waiting for machine to come up
	I1007 13:43:27.432222  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432855  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | unable to find current IP address of domain default-k8s-diff-port-489319 in network mk-default-k8s-diff-port-489319
	I1007 13:43:27.432882  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | I1007 13:43:27.432789  804186 retry.go:31] will retry after 3.63047619s: waiting for machine to come up
	I1007 13:43:31.065089  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.065729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Found IP for machine: 192.168.61.101
	I1007 13:43:31.065759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserving static IP address...
	I1007 13:43:31.065782  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has current primary IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.066317  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.066362  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Reserved static IP address: 192.168.61.101
	I1007 13:43:31.066395  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | skip adding static IP to network mk-default-k8s-diff-port-489319 - found existing host DHCP lease matching {name: "default-k8s-diff-port-489319", mac: "52:54:00:5a:71:ec", ip: "192.168.61.101"}
	I1007 13:43:31.066407  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Waiting for SSH to be available...
	I1007 13:43:31.066449  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Getting to WaitForSSH function...
	I1007 13:43:31.068871  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069233  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.069265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.069368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH client type: external
	I1007 13:43:31.069398  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa (-rw-------)
	I1007 13:43:31.069451  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:43:31.069466  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | About to run SSH command:
	I1007 13:43:31.069475  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | exit 0
	I1007 13:43:31.194580  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | SSH cmd err, output: <nil>: 
	I1007 13:43:31.195021  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetConfigRaw
	I1007 13:43:31.195801  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.198966  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199324  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.199359  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.199635  802960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/config.json ...
	I1007 13:43:31.199893  802960 machine.go:93] provisionDockerMachine start ...
	I1007 13:43:31.199919  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:31.200168  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.202444  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202817  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.202849  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.202989  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.203185  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203352  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.203515  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.203683  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.203930  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.203943  802960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:43:31.307182  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 13:43:31.307224  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307497  802960 buildroot.go:166] provisioning hostname "default-k8s-diff-port-489319"
	I1007 13:43:31.307525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.307722  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.310462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.310835  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.310905  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.311014  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.311192  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311437  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.311613  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.311794  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.311969  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.311981  802960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-489319 && echo "default-k8s-diff-port-489319" | sudo tee /etc/hostname
	I1007 13:43:31.436251  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-489319
	
	I1007 13:43:31.436288  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.439927  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440241  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.440276  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.440616  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.440887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441042  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.441197  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.441360  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:31.441584  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:31.441612  802960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-489319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-489319/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-489319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:43:31.552909  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:43:31.552947  802960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:43:31.552983  802960 buildroot.go:174] setting up certificates
	I1007 13:43:31.553002  802960 provision.go:84] configureAuth start
	I1007 13:43:31.553012  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetMachineName
	I1007 13:43:31.553454  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:31.556642  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557015  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.557055  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.557256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.559909  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560460  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.560487  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.560719  802960 provision.go:143] copyHostCerts
	I1007 13:43:31.560792  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:43:31.560812  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:43:31.560889  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:43:31.561045  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:43:31.561058  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:43:31.561084  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:43:31.561171  802960 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:43:31.561180  802960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:43:31.561208  802960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:43:31.561271  802960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-489319 san=[127.0.0.1 192.168.61.101 default-k8s-diff-port-489319 localhost minikube]
	I1007 13:43:31.871377  802960 provision.go:177] copyRemoteCerts
	I1007 13:43:31.871459  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:43:31.871489  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:31.874464  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.874887  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:31.874925  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:31.875112  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:31.875368  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:31.875547  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:31.875675  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:31.957423  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:43:31.988554  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1007 13:43:32.018470  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:43:32.046799  802960 provision.go:87] duration metric: took 493.782862ms to configureAuth
	I1007 13:43:32.046830  802960 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:43:32.047021  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:43:32.047151  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.050313  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.050727  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.050760  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.051011  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.051216  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051385  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.051522  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.051685  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.051878  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.051893  802960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:43:32.291927  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:43:32.291957  802960 machine.go:96] duration metric: took 1.092049658s to provisionDockerMachine
	I1007 13:43:32.291970  802960 start.go:293] postStartSetup for "default-k8s-diff-port-489319" (driver="kvm2")
	I1007 13:43:32.291985  802960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:43:32.292025  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.292491  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:43:32.292523  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.296195  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296625  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.296660  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.296889  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.297104  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.297300  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.297479  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.377749  802960 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:43:32.382419  802960 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:43:32.382459  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:43:32.382557  802960 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:43:32.382663  802960 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:43:32.382767  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:43:32.394059  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:32.422256  802960 start.go:296] duration metric: took 130.264438ms for postStartSetup
	I1007 13:43:32.422310  802960 fix.go:56] duration metric: took 17.775926417s for fixHost
	I1007 13:43:32.422340  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.425739  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.426254  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.426473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.426678  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426827  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.426941  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.427080  802960 main.go:141] libmachine: Using SSH client type: native
	I1007 13:43:32.427294  802960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.101 22 <nil> <nil>}
	I1007 13:43:32.427305  802960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:43:32.531411  802960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728308612.494637714
	
	I1007 13:43:32.531442  802960 fix.go:216] guest clock: 1728308612.494637714
	I1007 13:43:32.531450  802960 fix.go:229] Guest: 2024-10-07 13:43:32.494637714 +0000 UTC Remote: 2024-10-07 13:43:32.422315329 +0000 UTC m=+300.358475670 (delta=72.322385ms)
	I1007 13:43:32.531474  802960 fix.go:200] guest clock delta is within tolerance: 72.322385ms
	I1007 13:43:32.531480  802960 start.go:83] releasing machines lock for "default-k8s-diff-port-489319", held for 17.885135029s
	I1007 13:43:32.531503  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.531787  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:32.534783  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535215  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.535265  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.535472  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536178  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536404  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:43:32.536518  802960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:43:32.536581  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.536697  802960 ssh_runner.go:195] Run: cat /version.json
	I1007 13:43:32.536729  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:43:32.539709  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.539743  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540166  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540202  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:32.540256  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540348  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:32.540417  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540598  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540638  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:43:32.540762  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.540777  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:43:32.540884  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.540947  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:43:32.541089  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:43:32.642238  802960 ssh_runner.go:195] Run: systemctl --version
	I1007 13:43:32.649391  802960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:43:32.799266  802960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:43:32.805598  802960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:43:32.805707  802960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:43:32.823518  802960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:43:32.823560  802960 start.go:495] detecting cgroup driver to use...
	I1007 13:43:32.823651  802960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:43:32.842054  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:43:32.858474  802960 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:43:32.858550  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:43:32.873750  802960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:43:32.889165  802960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:43:33.019729  802960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:43:33.182269  802960 docker.go:233] disabling docker service ...
	I1007 13:43:33.182371  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:43:33.198610  802960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:43:33.213911  802960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:43:33.343594  802960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:43:33.476026  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:43:33.493130  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:43:33.513584  802960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:43:33.513652  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.525714  802960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:43:33.525816  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.538658  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.551146  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.564914  802960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:43:33.578180  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.590140  802960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.610967  802960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:43:33.624890  802960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:43:33.636736  802960 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:43:33.636825  802960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:43:33.652573  802960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:43:33.665083  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:33.800780  802960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:43:33.898225  802960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:43:33.898309  802960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:43:33.903209  802960 start.go:563] Will wait 60s for crictl version
	I1007 13:43:33.903269  802960 ssh_runner.go:195] Run: which crictl
	I1007 13:43:33.907326  802960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:43:33.959008  802960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:43:33.959168  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:33.990929  802960 ssh_runner.go:195] Run: crio --version
	I1007 13:43:34.023756  802960 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:43:34.025496  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetIP
	I1007 13:43:34.028784  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029327  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:43:34.029360  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:43:34.029672  802960 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:43:34.034690  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:34.048101  802960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:43:34.048259  802960 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:43:34.048325  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:34.086926  802960 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:43:34.087050  802960 ssh_runner.go:195] Run: which lz4
	I1007 13:43:34.091973  802960 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:43:34.096623  802960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:43:34.096671  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:43:35.604800  802960 crio.go:462] duration metric: took 1.512877493s to copy over tarball
	I1007 13:43:35.604892  802960 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:43:37.805292  802960 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200363211s)
	I1007 13:43:37.805327  802960 crio.go:469] duration metric: took 2.200488229s to extract the tarball
	I1007 13:43:37.805338  802960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:43:37.845477  802960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:43:37.895532  802960 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:43:37.895562  802960 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:43:37.895574  802960 kubeadm.go:934] updating node { 192.168.61.101 8444 v1.31.1 crio true true} ...
	I1007 13:43:37.895725  802960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-489319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:43:37.895804  802960 ssh_runner.go:195] Run: crio config
	I1007 13:43:37.949367  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:37.949395  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:37.949410  802960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:43:37.949433  802960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.101 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-489319 NodeName:default-k8s-diff-port-489319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:43:37.949576  802960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.101
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-489319"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
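The block above is the multi-document kubeadm.yaml that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new on the node: an InitConfiguration and ClusterConfiguration for kubeadm, plus a KubeletConfiguration and KubeProxyConfiguration. As a minimal, stdlib-only sketch (illustrative only, not minikube's code), one could split such a file on its document separators and list the kinds it contains to confirm all four documents were written:

// sketch.go -- illustrative only; not part of minikube. Splits a rendered
// multi-document kubeadm.yaml and prints each document's "kind:" line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log above; point it at a local copy if inspecting offline.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Println(strings.TrimSpace(line))
			}
		}
	}
}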
	I1007 13:43:37.949659  802960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:43:37.959941  802960 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:43:37.960076  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:43:37.970766  802960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1007 13:43:37.989311  802960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:43:38.009634  802960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1007 13:43:38.027642  802960 ssh_runner.go:195] Run: grep 192.168.61.101	control-plane.minikube.internal$ /etc/hosts
	I1007 13:43:38.031764  802960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:43:38.044131  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:43:38.185253  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:43:38.212538  802960 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319 for IP: 192.168.61.101
	I1007 13:43:38.212565  802960 certs.go:194] generating shared ca certs ...
	I1007 13:43:38.212589  802960 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:43:38.212799  802960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:43:38.212859  802960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:43:38.212873  802960 certs.go:256] generating profile certs ...
	I1007 13:43:38.212997  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/client.key
	I1007 13:43:38.213082  802960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key.f1e25377
	I1007 13:43:38.213153  802960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key
	I1007 13:43:38.213325  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:43:38.213365  802960 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:43:38.213390  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:43:38.213425  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:43:38.213471  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:43:38.213501  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:43:38.213559  802960 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:43:38.214588  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:43:38.266516  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:43:38.305985  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:43:38.353490  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:43:38.380638  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 13:43:38.424440  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:43:38.452428  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:43:38.480709  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/default-k8s-diff-port-489319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:43:38.509639  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:43:38.536940  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:43:38.564021  802960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:43:38.591067  802960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:43:38.609218  802960 ssh_runner.go:195] Run: openssl version
	I1007 13:43:38.616235  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:43:38.629007  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634324  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.634400  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:43:38.641330  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:43:38.654384  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:43:38.667134  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672330  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.672407  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:43:38.678719  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:43:38.690565  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:43:38.705158  802960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710787  802960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.710868  802960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:43:38.717093  802960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:43:38.729957  802960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:43:38.735559  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:43:38.742580  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:43:38.749684  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:43:38.756534  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:43:38.762897  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:43:38.770450  802960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
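Each of the runs above is `openssl x509 -noout -in <cert> -checkend 86400`, i.e. a check that the existing control-plane certificates under /var/lib/minikube/certs are still valid for at least 24 hours before they are reused. A rough Go equivalent of that check (a sketch only; the certificate path here is hypothetical and not what minikube itself executes):

// checkend.go -- approximate Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local path; the log checks certs under /var/lib/minikube/certs.
	raw, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	// Mirrors -checkend 86400: still valid for more than 24h from now?
	if time.Until(cert.NotAfter) > 24*time.Hour {
		fmt.Println("certificate will not expire within 24h; reused")
	} else {
		fmt.Println("certificate expires within 24h; would be regenerated")
	}
}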
	I1007 13:43:38.777701  802960 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-489319 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-489319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:43:38.777813  802960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:43:38.777880  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.822678  802960 cri.go:89] found id: ""
	I1007 13:43:38.822746  802960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:43:38.833436  802960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:43:38.833463  802960 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:43:38.833516  802960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:43:38.844226  802960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:43:38.845383  802960 kubeconfig.go:125] found "default-k8s-diff-port-489319" server: "https://192.168.61.101:8444"
	I1007 13:43:38.848063  802960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:43:38.859087  802960 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.101
	I1007 13:43:38.859129  802960 kubeadm.go:1160] stopping kube-system containers ...
	I1007 13:43:38.859142  802960 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 13:43:38.859221  802960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:43:38.902955  802960 cri.go:89] found id: ""
	I1007 13:43:38.903054  802960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 13:43:38.920556  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:43:38.930998  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:43:38.931027  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:43:38.931095  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:43:38.940538  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:43:38.940608  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:43:38.951198  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:43:38.960653  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:43:38.960746  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:43:38.970800  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.981094  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:43:38.981176  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:43:38.991845  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:43:39.001966  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:43:39.002080  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:43:39.014014  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:43:39.026304  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:39.157169  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.098491  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.941274215s)
	I1007 13:43:41.098539  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.310925  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.402330  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:41.502763  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:43:41.502864  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.003197  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:45.194317  800812 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:43:45.194637  800812 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:43:45.194670  800812 kubeadm.go:310] 
	I1007 13:43:45.194721  800812 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:43:45.194779  800812 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:43:45.194789  800812 kubeadm.go:310] 
	I1007 13:43:45.194832  800812 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:43:45.194873  800812 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:43:45.195053  800812 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:43:45.195079  800812 kubeadm.go:310] 
	I1007 13:43:45.195219  800812 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:43:45.195259  800812 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:43:45.195300  800812 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:43:45.195309  800812 kubeadm.go:310] 
	I1007 13:43:45.195434  800812 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:43:45.195533  800812 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:43:45.195542  800812 kubeadm.go:310] 
	I1007 13:43:45.195691  800812 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:43:45.195814  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:43:45.195912  800812 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:43:45.196007  800812 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:43:45.196018  800812 kubeadm.go:310] 
	I1007 13:43:45.196865  800812 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:43:45.197021  800812 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:43:45.197130  800812 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:43:45.197242  800812 kubeadm.go:394] duration metric: took 7m57.99434545s to StartCluster
	I1007 13:43:45.197299  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:43:45.197368  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:43:45.245334  800812 cri.go:89] found id: ""
	I1007 13:43:45.245369  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.245380  800812 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:43:45.245390  800812 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:43:45.245464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:43:45.287324  800812 cri.go:89] found id: ""
	I1007 13:43:45.287363  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.287375  800812 logs.go:284] No container was found matching "etcd"
	I1007 13:43:45.287384  800812 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:43:45.287464  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:43:45.323565  800812 cri.go:89] found id: ""
	I1007 13:43:45.323606  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.323619  800812 logs.go:284] No container was found matching "coredns"
	I1007 13:43:45.323627  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:43:45.323708  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:43:45.365920  800812 cri.go:89] found id: ""
	I1007 13:43:45.365955  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.365967  800812 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:43:45.365976  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:43:45.366052  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:43:45.409136  800812 cri.go:89] found id: ""
	I1007 13:43:45.409177  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.409189  800812 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:43:45.409199  800812 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:43:45.409268  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:43:45.455631  800812 cri.go:89] found id: ""
	I1007 13:43:45.455667  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.455676  800812 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:43:45.455683  800812 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:43:45.455746  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:43:45.512092  800812 cri.go:89] found id: ""
	I1007 13:43:45.512134  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.512146  800812 logs.go:284] No container was found matching "kindnet"
	I1007 13:43:45.512155  800812 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:43:45.512223  800812 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:43:45.561541  800812 cri.go:89] found id: ""
	I1007 13:43:45.561579  800812 logs.go:282] 0 containers: []
	W1007 13:43:45.561592  800812 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1007 13:43:45.561614  800812 logs.go:123] Gathering logs for container status ...
	I1007 13:43:45.561635  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:43:45.609728  800812 logs.go:123] Gathering logs for kubelet ...
	I1007 13:43:45.609765  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:43:45.662962  800812 logs.go:123] Gathering logs for dmesg ...
	I1007 13:43:45.663007  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:43:45.680441  800812 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:43:45.680496  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:43:45.768165  800812 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:43:45.768198  800812 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:43:45.768214  800812 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1007 13:43:45.889172  800812 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:43:45.889245  800812 out.go:270] * 
	W1007 13:43:45.889310  800812 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.889324  800812 out.go:270] * 
	W1007 13:43:45.890214  800812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:43:45.893670  800812 out.go:201] 
	W1007 13:43:45.895121  800812 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:43:45.895161  800812 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:43:45.895184  800812 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:43:45.896672  800812 out.go:201] 
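The failure above (process 800812, the old-k8s-version v1.20.0 start) is kubeadm's kubelet-check repeatedly probing the kubelet's local healthz endpoint, `http://localhost:10248/healthz`, and getting connection refused: the kubelet process never came up, which is why the log points at `systemctl status kubelet` and `journalctl -xeu kubelet`. A minimal sketch of the same probe, assuming it is run on the node itself (not on the CI host), illustrative only and not the actual kubeadm code:

// kubelet_probe.go -- sketch of the kubelet-check health probe described above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A "connection refused" here matches the failures in the log:
		// the kubelet is not listening on its healthz port at all.
		fmt.Fprintln(os.Stderr, "kubelet healthz probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}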
	I1007 13:43:42.503307  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:43:42.523040  802960 api_server.go:72] duration metric: took 1.020293575s to wait for apiserver process to appear ...
	I1007 13:43:42.523069  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:43:42.523093  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:42.523750  802960 api_server.go:269] stopped: https://192.168.61.101:8444/healthz: Get "https://192.168.61.101:8444/healthz": dial tcp 192.168.61.101:8444: connect: connection refused
	I1007 13:43:43.023271  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.500619  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.500651  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.500665  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.544628  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 13:43:45.544688  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 13:43:45.544701  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:45.643845  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:45.643890  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.023194  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.029635  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.029672  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:46.523339  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:46.528709  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:46.528745  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.023901  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.032151  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:43:47.032192  802960 api_server.go:103] status: https://192.168.61.101:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:43:47.523593  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:43:47.531558  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:43:47.542161  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:43:47.542203  802960 api_server.go:131] duration metric: took 5.019126566s to wait for apiserver health ...
	I1007 13:43:47.542216  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:43:47.542227  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:43:47.544352  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:43:47.546075  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:43:47.560213  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:43:47.612380  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:43:47.633953  802960 system_pods.go:59] 8 kube-system pods found
	I1007 13:43:47.634015  802960 system_pods.go:61] "coredns-7c65d6cfc9-4nl8s" [798ab07d-53ab-45f3-9517-a3ea78152fc7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 13:43:47.634042  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [a3fd82bc-a9b5-4955-b3f8-d88c5bb5951d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 13:43:47.634058  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [431b750f-f9ca-4e27-a7db-6c758047acf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:43:47.634069  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [0289a6a2-f3b7-43fa-a97c-4464b93c2ecc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:43:47.634081  802960 system_pods.go:61] "kube-proxy-9s9p4" [8aeaf16d-764e-4da5-b27d-1915e33b3f2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 13:43:47.634102  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [4e5878d2-8ceb-4707-b2fd-834fd5f485be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 13:43:47.634114  802960 system_pods.go:61] "metrics-server-6867b74b74-s8v5f" [c498a0f1-ffb8-482d-b6be-ce04d3d6ff85] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:43:47.634120  802960 system_pods.go:61] "storage-provisioner" [c7754b45-21b7-4a4e-b21a-11c5e9eae07d] Running
	I1007 13:43:47.634133  802960 system_pods.go:74] duration metric: took 21.726405ms to wait for pod list to return data ...
	I1007 13:43:47.634143  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:43:47.646482  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:43:47.646520  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:43:47.646534  802960 node_conditions.go:105] duration metric: took 12.386071ms to run NodePressure ...
	I1007 13:43:47.646556  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 13:43:48.002169  802960 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007151  802960 kubeadm.go:739] kubelet initialised
	I1007 13:43:48.007183  802960 kubeadm.go:740] duration metric: took 4.972433ms waiting for restarted kubelet to initialise ...
	I1007 13:43:48.007211  802960 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:43:48.013961  802960 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:50.020725  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:52.020875  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:53.521602  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.521625  802960 pod_ready.go:82] duration metric: took 5.507628288s for pod "coredns-7c65d6cfc9-4nl8s" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.521637  802960 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529062  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:43:53.529090  802960 pod_ready.go:82] duration metric: took 7.446479ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:53.529101  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:43:55.536129  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:43:58.036214  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:00.535183  802960 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:02.035543  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.035567  802960 pod_ready.go:82] duration metric: took 8.506460378s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.035578  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040799  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.040823  802960 pod_ready.go:82] duration metric: took 5.237515ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.040833  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045410  802960 pod_ready.go:93] pod "kube-proxy-9s9p4" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.045434  802960 pod_ready.go:82] duration metric: took 4.593822ms for pod "kube-proxy-9s9p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.045444  802960 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049665  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:44:02.049691  802960 pod_ready.go:82] duration metric: took 4.239058ms for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:02.049701  802960 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	I1007 13:44:04.056407  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:06.062186  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:08.555372  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:10.556334  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:12.556423  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:14.557939  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:17.055829  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:19.056756  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:21.057049  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:23.058462  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:25.556545  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:27.556661  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:30.057123  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:32.057581  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:34.556797  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:37.055971  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:39.057054  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:41.057194  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:43.555532  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:45.556365  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:47.556508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:50.056070  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:52.056349  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:54.057809  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:56.556012  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:44:58.556338  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:00.558599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:03.058077  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:05.558375  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:07.558780  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:10.055494  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:12.057085  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:14.557752  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:17.056626  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:19.556724  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:22.057696  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:24.556552  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:27.056861  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:29.057505  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:31.555965  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:33.557729  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:35.557839  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:38.056814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:40.057838  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:42.058324  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:44.557202  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:47.056736  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:49.057871  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:51.556705  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:53.557023  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:55.557080  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:45:57.557599  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:00.057399  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:02.057880  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:04.556689  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:06.557381  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:09.057237  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:11.057328  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:13.556210  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:15.556303  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:17.556994  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:19.557835  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:22.056480  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:24.556325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:26.556600  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:28.556639  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:30.556983  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:33.056142  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:35.057034  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:37.057246  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:39.556678  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:42.056900  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:44.057207  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:46.057325  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:48.556417  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:51.056726  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:53.556598  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:55.557245  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:58.058116  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:00.059008  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:02.557074  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:05.056911  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:07.057374  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:09.556185  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:11.556584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:14.056433  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:16.056567  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:18.557584  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:21.056484  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:23.056610  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.058105  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:27.555814  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.556605  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:31.557226  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:34.057006  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.556126  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:38.556720  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:40.557339  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.055498  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:45.056400  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:47.056671  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:49.556490  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:52.056617  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:54.556079  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.556885  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:59.056725  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:01.560508  802960 pod_ready.go:103] pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.050835  802960 pod_ready.go:82] duration metric: took 4m0.001111748s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" ...
	E1007 13:48:02.050883  802960 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-s8v5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I1007 13:48:02.050910  802960 pod_ready.go:39] duration metric: took 4m14.0436862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:02.050947  802960 kubeadm.go:597] duration metric: took 4m23.217477497s to restartPrimaryControlPlane
	W1007 13:48:02.051112  802960 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 13:48:02.051179  802960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:48:28.304486  802960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.253272533s)
	I1007 13:48:28.304707  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:28.320794  802960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:48:28.332332  802960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:48:28.343070  802960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:48:28.343095  802960 kubeadm.go:157] found existing configuration files:
	
	I1007 13:48:28.343157  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1007 13:48:28.354012  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:48:28.354118  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:48:28.364581  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1007 13:48:28.375492  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:48:28.375560  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:48:28.386761  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.396663  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:48:28.396728  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:48:28.407316  802960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1007 13:48:28.417872  802960 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:48:28.417938  802960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:48:28.428569  802960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:48:28.476704  802960 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:48:28.476823  802960 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:48:28.590009  802960 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:48:28.590162  802960 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:48:28.590300  802960 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:48:28.600046  802960 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:48:28.602443  802960 out.go:235]   - Generating certificates and keys ...
	I1007 13:48:28.602559  802960 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:48:28.602623  802960 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:48:28.602711  802960 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:48:28.602790  802960 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:48:28.602884  802960 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:48:28.602931  802960 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:48:28.603008  802960 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:48:28.603118  802960 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:48:28.603256  802960 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:48:28.603372  802960 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:48:28.603429  802960 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:48:28.603498  802960 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:48:28.710739  802960 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:48:28.967010  802960 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:48:29.107742  802960 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:48:29.239779  802960 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:48:29.344572  802960 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:48:29.345301  802960 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:48:29.348025  802960 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:48:29.350415  802960 out.go:235]   - Booting up control plane ...
	I1007 13:48:29.350549  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:48:29.350650  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:48:29.350732  802960 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:48:29.369742  802960 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:48:29.379251  802960 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:48:29.379337  802960 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:48:29.527857  802960 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:48:29.528013  802960 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:48:30.528609  802960 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001343456s
	I1007 13:48:30.528741  802960 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:48:35.532432  802960 kubeadm.go:310] [api-check] The API server is healthy after 5.003996251s
	I1007 13:48:35.548242  802960 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:48:35.569290  802960 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:48:35.607149  802960 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:48:35.607386  802960 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-489319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:48:35.623965  802960 kubeadm.go:310] [bootstrap-token] Using token: 5jqtrt.7avot15frjqa3f3n
	I1007 13:48:35.626327  802960 out.go:235]   - Configuring RBAC rules ...
	I1007 13:48:35.626469  802960 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:48:35.632447  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:48:35.644119  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:48:35.653482  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:48:35.659903  802960 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:48:35.666151  802960 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:48:35.941468  802960 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:48:36.395332  802960 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:48:36.941654  802960 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:48:36.942749  802960 kubeadm.go:310] 
	I1007 13:48:36.942851  802960 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:48:36.942863  802960 kubeadm.go:310] 
	I1007 13:48:36.942955  802960 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:48:36.942966  802960 kubeadm.go:310] 
	I1007 13:48:36.942997  802960 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:48:36.943073  802960 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:48:36.943160  802960 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:48:36.943180  802960 kubeadm.go:310] 
	I1007 13:48:36.943247  802960 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:48:36.943254  802960 kubeadm.go:310] 
	I1007 13:48:36.943300  802960 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:48:36.943310  802960 kubeadm.go:310] 
	I1007 13:48:36.943379  802960 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:48:36.943477  802960 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:48:36.943559  802960 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:48:36.943567  802960 kubeadm.go:310] 
	I1007 13:48:36.943639  802960 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:48:36.943758  802960 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:48:36.943781  802960 kubeadm.go:310] 
	I1007 13:48:36.944023  802960 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944184  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:48:36.944212  802960 kubeadm.go:310] 	--control-plane 
	I1007 13:48:36.944225  802960 kubeadm.go:310] 
	I1007 13:48:36.944328  802960 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:48:36.944341  802960 kubeadm.go:310] 
	I1007 13:48:36.944441  802960 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5jqtrt.7avot15frjqa3f3n \
	I1007 13:48:36.944564  802960 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:48:36.946569  802960 kubeadm.go:310] W1007 13:48:28.442953    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.946947  802960 kubeadm.go:310] W1007 13:48:28.444068    2546 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:48:36.947056  802960 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:48:36.947089  802960 cni.go:84] Creating CNI manager for ""
	I1007 13:48:36.947100  802960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:48:36.949279  802960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:48:36.951020  802960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:48:36.966261  802960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:48:36.991447  802960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:48:36.991537  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:36.991576  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-489319 minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=default-k8s-diff-port-489319 minikube.k8s.io/primary=true
	I1007 13:48:37.245837  802960 ops.go:34] apiserver oom_adj: -16
	I1007 13:48:37.253690  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:37.754572  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.254294  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:38.754766  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.253915  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:39.754118  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.254526  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:40.753887  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.254082  802960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:48:41.441338  802960 kubeadm.go:1113] duration metric: took 4.449876263s to wait for elevateKubeSystemPrivileges
	I1007 13:48:41.441397  802960 kubeadm.go:394] duration metric: took 5m2.66370907s to StartCluster
	I1007 13:48:41.441446  802960 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.441564  802960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:48:41.443987  802960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:48:41.444365  802960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.101 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:48:41.444449  802960 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:48:41.444606  802960 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444633  802960 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444647  802960 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:48:41.444644  802960 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:48:41.444669  802960 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444689  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444696  802960 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-489319"
	I1007 13:48:41.444748  802960 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.444763  802960 addons.go:243] addon metrics-server should already be in state true
	I1007 13:48:41.444799  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.444711  802960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-489319"
	I1007 13:48:41.445223  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445236  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445242  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.445285  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445305  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.445290  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.446533  802960 out.go:177] * Verifying Kubernetes components...
	I1007 13:48:41.448204  802960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:48:41.463351  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I1007 13:48:41.463547  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I1007 13:48:41.464007  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464024  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.464636  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464651  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.464667  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.464674  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.465115  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465118  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.465331  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.465770  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.465817  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.466630  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I1007 13:48:41.467414  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.468267  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.468293  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.468696  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.469177  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.469225  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.469939  802960 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-489319"
	W1007 13:48:41.469967  802960 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:48:41.470004  802960 host.go:66] Checking if "default-k8s-diff-port-489319" exists ...
	I1007 13:48:41.470429  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.470491  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.485835  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I1007 13:48:41.485934  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I1007 13:48:41.486390  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I1007 13:48:41.486401  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486694  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.486850  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.487029  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487048  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487286  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487314  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487375  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.487668  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.487692  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.487915  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.487940  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488170  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.488207  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.488812  802960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:48:41.488866  802960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:48:41.490870  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.491026  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.493370  802960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:48:41.493369  802960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:48:41.495269  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:48:41.495304  802960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:48:41.495335  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.495482  802960 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.495504  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:48:41.495525  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.499997  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500173  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500600  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500622  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.500819  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.500837  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.501010  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501125  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.501279  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501286  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501473  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.501657  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.501683  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.509460  802960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I1007 13:48:41.510229  802960 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:48:41.510898  802960 main.go:141] libmachine: Using API Version  1
	I1007 13:48:41.510934  802960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:48:41.511328  802960 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:48:41.511540  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetState
	I1007 13:48:41.513219  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .DriverName
	I1007 13:48:41.513712  802960 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.513734  802960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:48:41.513759  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHHostname
	I1007 13:48:41.517041  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517439  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:71:ec", ip: ""} in network mk-default-k8s-diff-port-489319: {Iface:virbr1 ExpiryTime:2024-10-07 14:35:09 +0000 UTC Type:0 Mac:52:54:00:5a:71:ec Iaid: IPaddr:192.168.61.101 Prefix:24 Hostname:default-k8s-diff-port-489319 Clientid:01:52:54:00:5a:71:ec}
	I1007 13:48:41.517462  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | domain default-k8s-diff-port-489319 has defined IP address 192.168.61.101 and MAC address 52:54:00:5a:71:ec in network mk-default-k8s-diff-port-489319
	I1007 13:48:41.517630  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHPort
	I1007 13:48:41.517885  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHKeyPath
	I1007 13:48:41.518121  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .GetSSHUsername
	I1007 13:48:41.518301  802960 sshutil.go:53] new ssh client: &{IP:192.168.61.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/default-k8s-diff-port-489319/id_rsa Username:docker}
	I1007 13:48:41.674144  802960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:48:41.742749  802960 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753582  802960 node_ready.go:49] node "default-k8s-diff-port-489319" has status "Ready":"True"
	I1007 13:48:41.753616  802960 node_ready.go:38] duration metric: took 10.764539ms for node "default-k8s-diff-port-489319" to be "Ready" ...
	I1007 13:48:41.753630  802960 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:41.769510  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:41.796357  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:48:41.844420  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:48:41.871099  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:48:41.871126  802960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:48:41.978289  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:48:41.978325  802960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:48:42.063366  802960 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.063399  802960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:48:42.204106  802960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:48:42.261831  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.261861  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.262168  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.262192  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.262202  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.262209  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.263023  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.263040  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.285756  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:42.285786  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:42.286112  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:42.286135  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:42.286145  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044454  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.199980665s)
	I1007 13:48:43.044515  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044524  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.044892  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.044910  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.044926  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.044934  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.044942  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.045192  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.045208  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.045193  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303372  802960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.099210402s)
	I1007 13:48:43.303432  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303452  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.303783  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.303801  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.303799  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) DBG | Closing plugin on server side
	I1007 13:48:43.303811  802960 main.go:141] libmachine: Making call to close driver server
	I1007 13:48:43.303821  802960 main.go:141] libmachine: (default-k8s-diff-port-489319) Calling .Close
	I1007 13:48:43.304077  802960 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:48:43.304094  802960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:48:43.304107  802960 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-489319"
	I1007 13:48:43.306084  802960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1007 13:48:43.307478  802960 addons.go:510] duration metric: took 1.863046306s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1007 13:48:43.778309  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:45.778814  802960 pod_ready.go:103] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:47.775390  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:47.775417  802960 pod_ready.go:82] duration metric: took 6.005863403s for pod "coredns-7c65d6cfc9-mrgdp" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:47.775431  802960 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789544  802960 pod_ready.go:93] pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.789573  802960 pod_ready.go:82] duration metric: took 1.01413369s for pod "coredns-7c65d6cfc9-szgtd" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.789587  802960 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796239  802960 pod_ready.go:93] pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.796267  802960 pod_ready.go:82] duration metric: took 6.671875ms for pod "etcd-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.796280  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.806996  802960 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.807030  802960 pod_ready.go:82] duration metric: took 10.740949ms for pod "kube-apiserver-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.807046  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814301  802960 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.814335  802960 pod_ready.go:82] duration metric: took 7.279716ms for pod "kube-controller-manager-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.814350  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976171  802960 pod_ready.go:93] pod "kube-proxy-jpvx5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:48.976198  802960 pod_ready.go:82] duration metric: took 161.84042ms for pod "kube-proxy-jpvx5" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:48.976209  802960 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175024  802960 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace has status "Ready":"True"
	I1007 13:48:50.175051  802960 pod_ready.go:82] duration metric: took 1.198834555s for pod "kube-scheduler-default-k8s-diff-port-489319" in "kube-system" namespace to be "Ready" ...
	I1007 13:48:50.175062  802960 pod_ready.go:39] duration metric: took 8.42141844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:48:50.175094  802960 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:48:50.175154  802960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:48:50.190906  802960 api_server.go:72] duration metric: took 8.746497817s to wait for apiserver process to appear ...
	I1007 13:48:50.190937  802960 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:48:50.190969  802960 api_server.go:253] Checking apiserver healthz at https://192.168.61.101:8444/healthz ...
	I1007 13:48:50.196727  802960 api_server.go:279] https://192.168.61.101:8444/healthz returned 200:
	ok
	I1007 13:48:50.197751  802960 api_server.go:141] control plane version: v1.31.1
	I1007 13:48:50.197774  802960 api_server.go:131] duration metric: took 6.829939ms to wait for apiserver health ...
	I1007 13:48:50.197783  802960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:48:50.378985  802960 system_pods.go:59] 9 kube-system pods found
	I1007 13:48:50.379015  802960 system_pods.go:61] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.379023  802960 system_pods.go:61] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.379029  802960 system_pods.go:61] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.379034  802960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.379041  802960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.379045  802960 system_pods.go:61] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.379050  802960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.379059  802960 system_pods.go:61] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.379066  802960 system_pods.go:61] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.379078  802960 system_pods.go:74] duration metric: took 181.288145ms to wait for pod list to return data ...
	I1007 13:48:50.379091  802960 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:48:50.574098  802960 default_sa.go:45] found service account: "default"
	I1007 13:48:50.574127  802960 default_sa.go:55] duration metric: took 195.025343ms for default service account to be created ...
	I1007 13:48:50.574137  802960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:48:50.777201  802960 system_pods.go:86] 9 kube-system pods found
	I1007 13:48:50.777233  802960 system_pods.go:89] "coredns-7c65d6cfc9-mrgdp" [a412fc5b-c29a-403d-87c3-2d0d035890fa] Running
	I1007 13:48:50.777238  802960 system_pods.go:89] "coredns-7c65d6cfc9-szgtd" [579c2478-e31e-41a7-b18b-749e86c54764] Running
	I1007 13:48:50.777243  802960 system_pods.go:89] "etcd-default-k8s-diff-port-489319" [8e728caa-27bf-4982-ac03-45ffbe158203] Running
	I1007 13:48:50.777247  802960 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-489319" [eebbf078-2635-42b8-a0a9-6495290d50d9] Running
	I1007 13:48:50.777252  802960 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-489319" [49814be9-ccfa-401e-a55a-1a59795ef7a7] Running
	I1007 13:48:50.777257  802960 system_pods.go:89] "kube-proxy-jpvx5" [df825f23-4b34-44f3-a641-905c8bdc25ab] Running
	I1007 13:48:50.777260  802960 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-489319" [7efc9619-57c8-40ed-a9ed-56e85c0dcebe] Running
	I1007 13:48:50.777269  802960 system_pods.go:89] "metrics-server-6867b74b74-drcg5" [c88368de-954a-484b-8332-a05bfb0b6c9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:48:50.777273  802960 system_pods.go:89] "storage-provisioner" [23077570-0411-48e4-9f38-2933e98132b6] Running
	I1007 13:48:50.777283  802960 system_pods.go:126] duration metric: took 203.138905ms to wait for k8s-apps to be running ...
	I1007 13:48:50.777292  802960 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:48:50.777338  802960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:50.794312  802960 system_svc.go:56] duration metric: took 17.00771ms WaitForService to wait for kubelet
	I1007 13:48:50.794350  802960 kubeadm.go:582] duration metric: took 9.349947078s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:48:50.794376  802960 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:48:50.974457  802960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:48:50.974484  802960 node_conditions.go:123] node cpu capacity is 2
	I1007 13:48:50.974507  802960 node_conditions.go:105] duration metric: took 180.125373ms to run NodePressure ...
	I1007 13:48:50.974520  802960 start.go:241] waiting for startup goroutines ...
	I1007 13:48:50.974526  802960 start.go:246] waiting for cluster config update ...
	I1007 13:48:50.974537  802960 start.go:255] writing updated cluster config ...
	I1007 13:48:50.974827  802960 ssh_runner.go:195] Run: rm -f paused
	I1007 13:48:51.030094  802960 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:48:51.032736  802960 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-489319" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.693717452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309293693687707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9e6ce3f-e9cc-4838-a490-ee17a9e4f04c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.694600039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d20e7b36-dddc-408a-8a82-9a25141dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.694679673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d20e7b36-dddc-408a-8a82-9a25141dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.694716390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d20e7b36-dddc-408a-8a82-9a25141dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.729313871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00ab3c56-46cf-4b7a-88fd-e9db4416bf60 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.729459174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00ab3c56-46cf-4b7a-88fd-e9db4416bf60 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.730752303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00705b0e-f864-4ea3-aa5a-8d189687cbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.731374085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309293731336544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00705b0e-f864-4ea3-aa5a-8d189687cbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.732184858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03020b25-836a-4941-bea2-04d1f9541d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.732239976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03020b25-836a-4941-bea2-04d1f9541d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.732275105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=03020b25-836a-4941-bea2-04d1f9541d1c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.774085274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb877c93-d8b0-42b3-9c93-301e3c0b6a0f name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.774192515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb877c93-d8b0-42b3-9c93-301e3c0b6a0f name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.775230037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0294a0b3-e0be-4428-819e-075539f53485 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.775692985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309293775662193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0294a0b3-e0be-4428-819e-075539f53485 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.776319698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64f1e897-7353-467d-8476-6ed7045d0f78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.776377106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64f1e897-7353-467d-8476-6ed7045d0f78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.776461640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=64f1e897-7353-467d-8476-6ed7045d0f78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.810949619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=325b2ce3-acba-4194-bda8-a0d50edd5ce1 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.811079440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=325b2ce3-acba-4194-bda8-a0d50edd5ce1 name=/runtime.v1.RuntimeService/Version
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.812538909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70fc36a3-6e6d-4f64-ba86-ff7f80b6731a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.813055054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309293813026570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70fc36a3-6e6d-4f64-ba86-ff7f80b6731a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.814112966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bad4cfab-3846-4a06-b3fa-101e04e4f9eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.814179082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bad4cfab-3846-4a06-b3fa-101e04e4f9eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 13:54:53 old-k8s-version-120978 crio[632]: time="2024-10-07 13:54:53.814216838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bad4cfab-3846-4a06-b3fa-101e04e4f9eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059927] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045313] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.123867] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.762449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.678964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.628433] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.062444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070622] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.220328] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.150806] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.291850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +7.145908] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.820671] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[Oct 7 13:36] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 7 13:39] systemd-fstab-generator[5058]: Ignoring "noauto" option for root device
	[Oct 7 13:41] systemd-fstab-generator[5332]: Ignoring "noauto" option for root device
	[  +0.074388] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:54:54 up 19 min,  0 users,  load average: 0.03, 0.07, 0.04
	Linux old-k8s-version-120978 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a60c0, 0xc0009f2fc0)
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: goroutine 155 [select]:
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b59ef0, 0x4f0ac20, 0xc000b3c7d0, 0x1, 0xc0000a60c0)
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000360d20, 0xc0000a60c0)
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b2c600, 0xc000b32c80)
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6805]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 07 13:54:53 old-k8s-version-120978 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 07 13:54:53 old-k8s-version-120978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 07 13:54:53 old-k8s-version-120978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 136.
	Oct 07 13:54:53 old-k8s-version-120978 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 07 13:54:53 old-k8s-version-120978 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6876]: I1007 13:54:53.955578    6876 server.go:416] Version: v1.20.0
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6876]: I1007 13:54:53.955997    6876 server.go:837] Client rotation is on, will bootstrap in background
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6876]: I1007 13:54:53.958064    6876 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6876]: W1007 13:54:53.959076    6876 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 07 13:54:53 old-k8s-version-120978 kubelet[6876]: I1007 13:54:53.959085    6876 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 2 (256.923278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-120978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (124.66s)
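Given the kubelet crash loop (systemd restart counter at 136) and the refused connection to localhost:8443 shown in the logs above, a minimal manual triage sketch for this profile could look like the following; the commands mirror the ssh invocations recorded in this report's Audit tables and are illustrative, not part of the test run:
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 sudo systemctl status kubelet --full --no-pager
	out/minikube-linux-amd64 ssh -p old-k8s-version-120978 sudo journalctl -xeu kubelet --no-pager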

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (179.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-07 14:00:51.548407821 +0000 UTC m=+6784.746947801
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-489319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.425µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-489319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
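For reference, the check this test automates can be reproduced by hand against the same profile; a minimal sketch, assuming the kubectl context, namespace, label, and deployment name shown above (commands illustrative, not part of the test run):
	kubectl --context default-k8s-diff-port-489319 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-489319 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
The second command prints the scraper image, which the test expects to contain registry.k8s.io/echoserver:1.4.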
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-489319 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-489319 logs -n 25: (1.336546706s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-221184 sudo                               | flannel-221184 | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-221184 sudo                               | flannel-221184 | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| delete  | -p flannel-221184                                    | flannel-221184 | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo docker                         | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo cat                            | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo                                | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo find                           | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-221184 sudo crio                           | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-221184                                     | bridge-221184  | jenkins | v1.34.0 | 07 Oct 24 14:00 UTC | 07 Oct 24 14:00 UTC |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:59:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:59:09.949405  816252 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:59:09.949666  816252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:59:09.949675  816252 out.go:358] Setting ErrFile to fd 2...
	I1007 13:59:09.949680  816252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:59:09.949863  816252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:59:09.950485  816252 out.go:352] Setting JSON to false
	I1007 13:59:09.951636  816252 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13299,"bootTime":1728296251,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:59:09.951711  816252 start.go:139] virtualization: kvm guest
	I1007 13:59:09.954142  816252 out.go:177] * [bridge-221184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:59:09.955545  816252 notify.go:220] Checking for updates...
	I1007 13:59:09.956872  816252 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:59:09.958189  816252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:59:09.959343  816252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:59:09.960618  816252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:59:09.961887  816252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:59:09.963336  816252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:59:09.966194  816252 config.go:182] Loaded profile config "default-k8s-diff-port-489319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:09.966306  816252 config.go:182] Loaded profile config "enable-default-cni-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:09.966402  816252 config.go:182] Loaded profile config "flannel-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:09.966515  816252 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:59:10.006992  816252 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:59:10.008492  816252 start.go:297] selected driver: kvm2
	I1007 13:59:10.008514  816252 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:59:10.008528  816252 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:59:10.009439  816252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:59:10.009533  816252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:59:10.026831  816252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:59:10.026891  816252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:59:10.027175  816252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:59:10.027218  816252 cni.go:84] Creating CNI manager for "bridge"
	I1007 13:59:10.027227  816252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 13:59:10.027284  816252 start.go:340] cluster config:
	{Name:bridge-221184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
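	The cluster config above corresponds roughly to a start invocation like the one below (a sketch only; the exact flag set used by the test harness is not recorded in this log, and the profile name is taken from the config):
	  minikube start -p bridge-221184 --driver=kvm2 --container-runtime=crio --cni=bridge --memory=3072 --cpus=2 --kubernetes-version=v1.31.1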
	I1007 13:59:10.027434  816252 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:59:10.030681  816252 out.go:177] * Starting "bridge-221184" primary control-plane node in "bridge-221184" cluster
	I1007 13:59:08.615279  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:10.616352  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:13.114239  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:12.488875  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:12.489382  814767 main.go:141] libmachine: (flannel-221184) DBG | unable to find current IP address of domain flannel-221184 in network mk-flannel-221184
	I1007 13:59:12.489409  814767 main.go:141] libmachine: (flannel-221184) DBG | I1007 13:59:12.489349  814789 retry.go:31] will retry after 4.737840754s: waiting for machine to come up
	I1007 13:59:10.032595  816252 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:59:10.032676  816252 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:59:10.032690  816252 cache.go:56] Caching tarball of preloaded images
	I1007 13:59:10.032860  816252 preload.go:172] Found /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:59:10.032879  816252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:59:10.033015  816252 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/config.json ...
	I1007 13:59:10.033039  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/config.json: {Name:mk48acd6864f6050646078952e1ad38029389718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:10.033214  816252 start.go:360] acquireMachinesLock for bridge-221184: {Name:mkaa77fcf81b6efa3134e7d933d5a8dd0adc7dfc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
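	The preload tarball referenced above is reused from the local cache rather than downloaded; it could be inspected on the Jenkins host with something like the following (path taken from the log lines above, command is illustrative):
	  ls -lh /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/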
	I1007 13:59:15.116223  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:15.614799  812940 pod_ready.go:98] pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.226 HostIPs:[{IP:192.168.50.226}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-07 13:59:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-07 13:59:05 +0000 UTC,FinishedAt:2024-10-07 13:59:15 +0000 UTC,ContainerID:cri-o://f55c530be96b7774255c61a97aff521c7082f01533f1bf08d0cf34d0a0c70e29,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f55c530be96b7774255c61a97aff521c7082f01533f1bf08d0cf34d0a0c70e29 Started:0xc001a3a7b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006b5170} {Name:kube-api-access-wlj99 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0006b5180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1007 13:59:15.614834  812940 pod_ready.go:82] duration metric: took 11.00746385s for pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace to be "Ready" ...
	E1007 13:59:15.614850  812940 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-swvg9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-07 13:59:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.226 HostIPs:[{IP:192.168.50.226}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-07 13:59:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-07 13:59:05 +0000 UTC,FinishedAt:2024-10-07 13:59:15 +0000 UTC,ContainerID:cri-o://f55c530be96b7774255c61a97aff521c7082f01533f1bf08d0cf34d0a0c70e29,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f55c530be96b7774255c61a97aff521c7082f01533f1bf08d0cf34d0a0c70e29 Started:0xc001a3a7b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006b5170} {Name:kube-api-access-wlj99 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0006b5180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1007 13:59:15.614863  812940 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:17.623243  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
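	The pod_ready loop above is polling CoreDNS readiness in whichever profile process 812940 is bringing up; the equivalent manual check would be something like the following (the context name is a placeholder, and k8s-app=kube-dns is the standard CoreDNS label):
	  kubectl --context <profile> -n kube-system get pods -l k8s-app=kube-dns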
	I1007 13:59:18.931559  816252 start.go:364] duration metric: took 8.898316115s to acquireMachinesLock for "bridge-221184"
	I1007 13:59:18.931634  816252 start.go:93] Provisioning new machine with config: &{Name:bridge-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:59:18.931758  816252 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:59:17.228418  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.228939  814767 main.go:141] libmachine: (flannel-221184) Found IP for machine: 192.168.39.119
	I1007 13:59:17.228959  814767 main.go:141] libmachine: (flannel-221184) Reserving static IP address...
	I1007 13:59:17.229009  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has current primary IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.229463  814767 main.go:141] libmachine: (flannel-221184) DBG | unable to find host DHCP lease matching {name: "flannel-221184", mac: "52:54:00:36:cc:79", ip: "192.168.39.119"} in network mk-flannel-221184
	I1007 13:59:17.321893  814767 main.go:141] libmachine: (flannel-221184) DBG | Getting to WaitForSSH function...
	I1007 13:59:17.321932  814767 main.go:141] libmachine: (flannel-221184) Reserved static IP address: 192.168.39.119
	I1007 13:59:17.321980  814767 main.go:141] libmachine: (flannel-221184) Waiting for SSH to be available...
	I1007 13:59:17.324874  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.325544  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.325577  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.325642  814767 main.go:141] libmachine: (flannel-221184) DBG | Using SSH client type: external
	I1007 13:59:17.325678  814767 main.go:141] libmachine: (flannel-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa (-rw-------)
	I1007 13:59:17.325711  814767 main.go:141] libmachine: (flannel-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:59:17.325738  814767 main.go:141] libmachine: (flannel-221184) DBG | About to run SSH command:
	I1007 13:59:17.325759  814767 main.go:141] libmachine: (flannel-221184) DBG | exit 0
	I1007 13:59:17.454070  814767 main.go:141] libmachine: (flannel-221184) DBG | SSH cmd err, output: <nil>: 
	I1007 13:59:17.454360  814767 main.go:141] libmachine: (flannel-221184) KVM machine creation complete!
	I1007 13:59:17.454722  814767 main.go:141] libmachine: (flannel-221184) Calling .GetConfigRaw
	I1007 13:59:17.455513  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:17.455698  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:17.455854  814767 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:59:17.455868  814767 main.go:141] libmachine: (flannel-221184) Calling .GetState
	I1007 13:59:17.457126  814767 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:59:17.457143  814767 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:59:17.457150  814767 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:59:17.457159  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:17.460849  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.461329  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.461358  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.461547  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:17.461784  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.461969  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.462172  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:17.462384  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:17.462656  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:17.462673  814767 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:59:17.573716  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:59:17.573745  814767 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:59:17.573758  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:17.576802  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.577240  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.577271  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.577415  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:17.577634  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.577829  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.578004  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:17.578246  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:17.578498  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:17.578520  814767 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:59:17.692305  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:59:17.692416  814767 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:59:17.692428  814767 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:59:17.692440  814767 main.go:141] libmachine: (flannel-221184) Calling .GetMachineName
	I1007 13:59:17.692746  814767 buildroot.go:166] provisioning hostname "flannel-221184"
	I1007 13:59:17.692775  814767 main.go:141] libmachine: (flannel-221184) Calling .GetMachineName
	I1007 13:59:17.693062  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:17.696449  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.696902  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.696932  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.697091  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:17.697350  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.697536  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.697778  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:17.698088  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:17.698323  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:17.698344  814767 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-221184 && echo "flannel-221184" | sudo tee /etc/hostname
	I1007 13:59:17.826579  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-221184
	
	I1007 13:59:17.826609  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:17.829493  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.829844  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.829890  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.830085  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:17.830258  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.830395  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:17.830533  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:17.830807  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:17.831042  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:17.831061  814767 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-221184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-221184/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-221184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:59:17.952118  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
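	Hostname provisioning for flannel-221184 can be spot-checked from the host with minikube's own SSH wrapper (illustrative only; the hostname and /etc/hosts entry are the ones set by the commands above):
	  minikube ssh -p flannel-221184 -- "hostname && grep flannel-221184 /etc/hosts"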
	I1007 13:59:17.952154  814767 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:59:17.952201  814767 buildroot.go:174] setting up certificates
	I1007 13:59:17.952223  814767 provision.go:84] configureAuth start
	I1007 13:59:17.952239  814767 main.go:141] libmachine: (flannel-221184) Calling .GetMachineName
	I1007 13:59:17.952479  814767 main.go:141] libmachine: (flannel-221184) Calling .GetIP
	I1007 13:59:17.955482  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.955887  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.955910  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.956092  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:17.958138  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.958476  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:17.958506  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:17.958619  814767 provision.go:143] copyHostCerts
	I1007 13:59:17.958684  814767 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:59:17.958714  814767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:59:17.958801  814767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:59:17.958929  814767 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:59:17.958944  814767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:59:17.958983  814767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:59:17.959057  814767 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:59:17.959068  814767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:59:17.959098  814767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:59:17.959163  814767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.flannel-221184 san=[127.0.0.1 192.168.39.119 flannel-221184 localhost minikube]
	I1007 13:59:18.254432  814767 provision.go:177] copyRemoteCerts
	I1007 13:59:18.254530  814767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:59:18.254560  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.258384  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.258713  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.258751  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.258892  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.259110  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.259289  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.259495  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:18.346956  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:59:18.375418  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1007 13:59:18.402069  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:59:18.427966  814767 provision.go:87] duration metric: took 475.702706ms to configureAuth
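	The certificates copied by copyRemoteCerts land under /etc/docker inside the guest; a quick way to confirm they arrived would be (illustrative only):
	  minikube ssh -p flannel-221184 -- "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"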
	I1007 13:59:18.428020  814767 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:59:18.428211  814767 config.go:182] Loaded profile config "flannel-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:18.428310  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.431213  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.431760  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.431801  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.431995  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.432207  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.432355  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.432477  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.432623  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:18.432840  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:18.432863  814767 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:59:18.669132  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
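	The CRI-O drop-in written above and the restarted service can be verified from inside the guest (illustrative only):
	  minikube ssh -p flannel-221184 -- "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"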
	I1007 13:59:18.669158  814767 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:59:18.669166  814767 main.go:141] libmachine: (flannel-221184) Calling .GetURL
	I1007 13:59:18.670613  814767 main.go:141] libmachine: (flannel-221184) DBG | Using libvirt version 6000000
	I1007 13:59:18.673201  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.673541  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.673576  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.673785  814767 main.go:141] libmachine: Docker is up and running!
	I1007 13:59:18.673810  814767 main.go:141] libmachine: Reticulating splines...
	I1007 13:59:18.673819  814767 client.go:171] duration metric: took 24.060534805s to LocalClient.Create
	I1007 13:59:18.673848  814767 start.go:167] duration metric: took 24.06061908s to libmachine.API.Create "flannel-221184"
	I1007 13:59:18.673862  814767 start.go:293] postStartSetup for "flannel-221184" (driver="kvm2")
	I1007 13:59:18.673875  814767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:59:18.673897  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:18.674166  814767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:59:18.674195  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.676546  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.676951  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.676987  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.677121  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.677365  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.677551  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.677717  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:18.766342  814767 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:59:18.771349  814767 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:59:18.771381  814767 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:59:18.771447  814767 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:59:18.771542  814767 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:59:18.771637  814767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:59:18.783644  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:59:18.809886  814767 start.go:296] duration metric: took 136.006102ms for postStartSetup
	I1007 13:59:18.809963  814767 main.go:141] libmachine: (flannel-221184) Calling .GetConfigRaw
	I1007 13:59:18.810636  814767 main.go:141] libmachine: (flannel-221184) Calling .GetIP
	I1007 13:59:18.813136  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.813566  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.813601  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.813890  814767 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/config.json ...
	I1007 13:59:18.814108  814767 start.go:128] duration metric: took 24.222627597s to createHost
	I1007 13:59:18.814133  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.816464  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.816905  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.816934  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.817113  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.817337  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.817477  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.817644  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.817795  814767 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:18.817975  814767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 13:59:18.817989  814767 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:59:18.931342  814767 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728309558.900136497
	
	I1007 13:59:18.931373  814767 fix.go:216] guest clock: 1728309558.900136497
	I1007 13:59:18.931386  814767 fix.go:229] Guest: 2024-10-07 13:59:18.900136497 +0000 UTC Remote: 2024-10-07 13:59:18.814121967 +0000 UTC m=+24.361669686 (delta=86.01453ms)
	I1007 13:59:18.931433  814767 fix.go:200] guest clock delta is within tolerance: 86.01453ms
	I1007 13:59:18.931441  814767 start.go:83] releasing machines lock for "flannel-221184", held for 24.340063589s
	I1007 13:59:18.931473  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:18.931764  814767 main.go:141] libmachine: (flannel-221184) Calling .GetIP
	I1007 13:59:18.935407  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.935679  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.935708  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.935924  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:18.936517  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:18.936730  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:18.936821  814767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:59:18.936877  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.936963  814767 ssh_runner.go:195] Run: cat /version.json
	I1007 13:59:18.937000  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:18.940132  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.940240  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.940593  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.940628  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:18.940653  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.940732  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:18.940856  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.941018  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:18.941099  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.941232  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:18.941281  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.941363  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:18.941414  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:18.941466  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:19.046192  814767 ssh_runner.go:195] Run: systemctl --version
	I1007 13:59:19.053870  814767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:59:19.225025  814767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:59:19.231857  814767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:59:19.231933  814767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:59:19.251513  814767 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:59:19.251546  814767 start.go:495] detecting cgroup driver to use...
	I1007 13:59:19.251622  814767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:59:19.284587  814767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:59:19.301044  814767 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:59:19.301152  814767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:59:19.320753  814767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:59:19.337827  814767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:59:19.466452  814767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:59:18.934233  816252 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 13:59:18.934460  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:18.934625  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:18.951850  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I1007 13:59:18.952361  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:18.952961  816252 main.go:141] libmachine: Using API Version  1
	I1007 13:59:18.952986  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:18.953400  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:18.953597  816252 main.go:141] libmachine: (bridge-221184) Calling .GetMachineName
	I1007 13:59:18.953754  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:18.954012  816252 start.go:159] libmachine.API.Create for "bridge-221184" (driver="kvm2")
	I1007 13:59:18.954095  816252 client.go:168] LocalClient.Create starting
	I1007 13:59:18.954146  816252 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem
	I1007 13:59:18.954206  816252 main.go:141] libmachine: Decoding PEM data...
	I1007 13:59:18.954230  816252 main.go:141] libmachine: Parsing certificate...
	I1007 13:59:18.954308  816252 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem
	I1007 13:59:18.954336  816252 main.go:141] libmachine: Decoding PEM data...
	I1007 13:59:18.954355  816252 main.go:141] libmachine: Parsing certificate...
	I1007 13:59:18.954379  816252 main.go:141] libmachine: Running pre-create checks...
	I1007 13:59:18.954403  816252 main.go:141] libmachine: (bridge-221184) Calling .PreCreateCheck
	I1007 13:59:18.954782  816252 main.go:141] libmachine: (bridge-221184) Calling .GetConfigRaw
	I1007 13:59:18.955255  816252 main.go:141] libmachine: Creating machine...
	I1007 13:59:18.955273  816252 main.go:141] libmachine: (bridge-221184) Calling .Create
	I1007 13:59:18.955393  816252 main.go:141] libmachine: (bridge-221184) Creating KVM machine...
	I1007 13:59:18.956945  816252 main.go:141] libmachine: (bridge-221184) DBG | found existing default KVM network
	I1007 13:59:18.958612  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:18.958441  816340 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:15:8b:83} reservation:<nil>}
	I1007 13:59:18.959572  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:18.959491  816340 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a5:da:43} reservation:<nil>}
	I1007 13:59:18.960404  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:18.960284  816340 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:40:af} reservation:<nil>}
	I1007 13:59:18.961570  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:18.961477  816340 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5a50}
	I1007 13:59:18.961597  816252 main.go:141] libmachine: (bridge-221184) DBG | created network xml: 
	I1007 13:59:18.961628  816252 main.go:141] libmachine: (bridge-221184) DBG | <network>
	I1007 13:59:18.961654  816252 main.go:141] libmachine: (bridge-221184) DBG |   <name>mk-bridge-221184</name>
	I1007 13:59:18.961668  816252 main.go:141] libmachine: (bridge-221184) DBG |   <dns enable='no'/>
	I1007 13:59:18.961676  816252 main.go:141] libmachine: (bridge-221184) DBG |   
	I1007 13:59:18.961695  816252 main.go:141] libmachine: (bridge-221184) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1007 13:59:18.961703  816252 main.go:141] libmachine: (bridge-221184) DBG |     <dhcp>
	I1007 13:59:18.961712  816252 main.go:141] libmachine: (bridge-221184) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1007 13:59:18.961722  816252 main.go:141] libmachine: (bridge-221184) DBG |     </dhcp>
	I1007 13:59:18.961730  816252 main.go:141] libmachine: (bridge-221184) DBG |   </ip>
	I1007 13:59:18.961739  816252 main.go:141] libmachine: (bridge-221184) DBG |   
	I1007 13:59:18.961747  816252 main.go:141] libmachine: (bridge-221184) DBG | </network>
	I1007 13:59:18.961754  816252 main.go:141] libmachine: (bridge-221184) DBG | 
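	The private libvirt network defined by the XML above can be inspected on the host with virsh once creation succeeds (illustrative; assumes virsh is available on the Jenkins host):
	  virsh net-list --all
	  virsh net-dumpxml mk-bridge-221184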
	I1007 13:59:18.967615  816252 main.go:141] libmachine: (bridge-221184) DBG | trying to create private KVM network mk-bridge-221184 192.168.72.0/24...
	I1007 13:59:19.051292  816252 main.go:141] libmachine: (bridge-221184) Setting up store path in /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184 ...
	I1007 13:59:19.051321  816252 main.go:141] libmachine: (bridge-221184) DBG | private KVM network mk-bridge-221184 192.168.72.0/24 created
	I1007 13:59:19.051349  816252 main.go:141] libmachine: (bridge-221184) Building disk image from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:59:19.051362  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:19.051200  816340 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:59:19.051391  816252 main.go:141] libmachine: (bridge-221184) Downloading /home/jenkins/minikube-integration/18424-747025/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:59:19.329770  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:19.329583  816340 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa...
	I1007 13:59:19.447075  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:19.446915  816340 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/bridge-221184.rawdisk...
	I1007 13:59:19.447113  816252 main.go:141] libmachine: (bridge-221184) DBG | Writing magic tar header
	I1007 13:59:19.447127  816252 main.go:141] libmachine: (bridge-221184) DBG | Writing SSH key tar header
	I1007 13:59:19.447163  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:19.447034  816340 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184 ...
	I1007 13:59:19.447180  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184
	I1007 13:59:19.447198  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184 (perms=drwx------)
	I1007 13:59:19.447213  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube/machines
	I1007 13:59:19.447230  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:59:19.447245  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18424-747025
	I1007 13:59:19.447260  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:59:19.447274  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025/.minikube (perms=drwxr-xr-x)
	I1007 13:59:19.447283  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins/minikube-integration/18424-747025 (perms=drwxrwxr-x)
	I1007 13:59:19.447292  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:59:19.447307  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:59:19.447318  816252 main.go:141] libmachine: (bridge-221184) DBG | Checking permissions on dir: /home
	I1007 13:59:19.447328  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:59:19.447339  816252 main.go:141] libmachine: (bridge-221184) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:59:19.447351  816252 main.go:141] libmachine: (bridge-221184) DBG | Skipping /home - not owner
	I1007 13:59:19.447385  816252 main.go:141] libmachine: (bridge-221184) Creating domain...
	I1007 13:59:19.448605  816252 main.go:141] libmachine: (bridge-221184) define libvirt domain using xml: 
	I1007 13:59:19.448637  816252 main.go:141] libmachine: (bridge-221184) <domain type='kvm'>
	I1007 13:59:19.448648  816252 main.go:141] libmachine: (bridge-221184)   <name>bridge-221184</name>
	I1007 13:59:19.448655  816252 main.go:141] libmachine: (bridge-221184)   <memory unit='MiB'>3072</memory>
	I1007 13:59:19.448664  816252 main.go:141] libmachine: (bridge-221184)   <vcpu>2</vcpu>
	I1007 13:59:19.448670  816252 main.go:141] libmachine: (bridge-221184)   <features>
	I1007 13:59:19.448704  816252 main.go:141] libmachine: (bridge-221184)     <acpi/>
	I1007 13:59:19.448733  816252 main.go:141] libmachine: (bridge-221184)     <apic/>
	I1007 13:59:19.448744  816252 main.go:141] libmachine: (bridge-221184)     <pae/>
	I1007 13:59:19.448751  816252 main.go:141] libmachine: (bridge-221184)     
	I1007 13:59:19.448763  816252 main.go:141] libmachine: (bridge-221184)   </features>
	I1007 13:59:19.448773  816252 main.go:141] libmachine: (bridge-221184)   <cpu mode='host-passthrough'>
	I1007 13:59:19.448782  816252 main.go:141] libmachine: (bridge-221184)   
	I1007 13:59:19.448791  816252 main.go:141] libmachine: (bridge-221184)   </cpu>
	I1007 13:59:19.448798  816252 main.go:141] libmachine: (bridge-221184)   <os>
	I1007 13:59:19.448808  816252 main.go:141] libmachine: (bridge-221184)     <type>hvm</type>
	I1007 13:59:19.448835  816252 main.go:141] libmachine: (bridge-221184)     <boot dev='cdrom'/>
	I1007 13:59:19.448859  816252 main.go:141] libmachine: (bridge-221184)     <boot dev='hd'/>
	I1007 13:59:19.448870  816252 main.go:141] libmachine: (bridge-221184)     <bootmenu enable='no'/>
	I1007 13:59:19.448889  816252 main.go:141] libmachine: (bridge-221184)   </os>
	I1007 13:59:19.448900  816252 main.go:141] libmachine: (bridge-221184)   <devices>
	I1007 13:59:19.448911  816252 main.go:141] libmachine: (bridge-221184)     <disk type='file' device='cdrom'>
	I1007 13:59:19.448925  816252 main.go:141] libmachine: (bridge-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/boot2docker.iso'/>
	I1007 13:59:19.448937  816252 main.go:141] libmachine: (bridge-221184)       <target dev='hdc' bus='scsi'/>
	I1007 13:59:19.448957  816252 main.go:141] libmachine: (bridge-221184)       <readonly/>
	I1007 13:59:19.448976  816252 main.go:141] libmachine: (bridge-221184)     </disk>
	I1007 13:59:19.448998  816252 main.go:141] libmachine: (bridge-221184)     <disk type='file' device='disk'>
	I1007 13:59:19.449025  816252 main.go:141] libmachine: (bridge-221184)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:59:19.449044  816252 main.go:141] libmachine: (bridge-221184)       <source file='/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/bridge-221184.rawdisk'/>
	I1007 13:59:19.449056  816252 main.go:141] libmachine: (bridge-221184)       <target dev='hda' bus='virtio'/>
	I1007 13:59:19.449068  816252 main.go:141] libmachine: (bridge-221184)     </disk>
	I1007 13:59:19.449079  816252 main.go:141] libmachine: (bridge-221184)     <interface type='network'>
	I1007 13:59:19.449093  816252 main.go:141] libmachine: (bridge-221184)       <source network='mk-bridge-221184'/>
	I1007 13:59:19.449108  816252 main.go:141] libmachine: (bridge-221184)       <model type='virtio'/>
	I1007 13:59:19.449118  816252 main.go:141] libmachine: (bridge-221184)     </interface>
	I1007 13:59:19.449123  816252 main.go:141] libmachine: (bridge-221184)     <interface type='network'>
	I1007 13:59:19.449129  816252 main.go:141] libmachine: (bridge-221184)       <source network='default'/>
	I1007 13:59:19.449139  816252 main.go:141] libmachine: (bridge-221184)       <model type='virtio'/>
	I1007 13:59:19.449148  816252 main.go:141] libmachine: (bridge-221184)     </interface>
	I1007 13:59:19.449158  816252 main.go:141] libmachine: (bridge-221184)     <serial type='pty'>
	I1007 13:59:19.449167  816252 main.go:141] libmachine: (bridge-221184)       <target port='0'/>
	I1007 13:59:19.449191  816252 main.go:141] libmachine: (bridge-221184)     </serial>
	I1007 13:59:19.449203  816252 main.go:141] libmachine: (bridge-221184)     <console type='pty'>
	I1007 13:59:19.449211  816252 main.go:141] libmachine: (bridge-221184)       <target type='serial' port='0'/>
	I1007 13:59:19.449219  816252 main.go:141] libmachine: (bridge-221184)     </console>
	I1007 13:59:19.449229  816252 main.go:141] libmachine: (bridge-221184)     <rng model='virtio'>
	I1007 13:59:19.449240  816252 main.go:141] libmachine: (bridge-221184)       <backend model='random'>/dev/random</backend>
	I1007 13:59:19.449253  816252 main.go:141] libmachine: (bridge-221184)     </rng>
	I1007 13:59:19.449263  816252 main.go:141] libmachine: (bridge-221184)     
	I1007 13:59:19.449270  816252 main.go:141] libmachine: (bridge-221184)     
	I1007 13:59:19.449280  816252 main.go:141] libmachine: (bridge-221184)   </devices>
	I1007 13:59:19.449286  816252 main.go:141] libmachine: (bridge-221184) </domain>
	I1007 13:59:19.449297  816252 main.go:141] libmachine: (bridge-221184) 
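The XML above is the complete libvirt domain the KVM driver defines for this node: a 2-vCPU, 3072 MiB guest that boots the boot2docker.iso from a SCSI cdrom, attaches the profile's raw disk image over virtio, and gets two virtio NICs, one on the per-profile mk-bridge-221184 network and one on libvirt's default NAT network. For reference, a minimal sketch of defining and booting such a domain with the libvirt Go bindings (assumed module path libvirt.org/go/libvirt; this is illustrative, not the docker-machine-driver-kvm2 source):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, matching KVMQemuURI:qemu:///system in the log.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain type='kvm'>...</domain> document shown above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then create (boot) it, as the
	// "define libvirt domain using xml" / "Creating domain..." steps do.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}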
	I1007 13:59:19.454477  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:05:22:53 in network default
	I1007 13:59:19.455417  816252 main.go:141] libmachine: (bridge-221184) Ensuring networks are active...
	I1007 13:59:19.455449  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:19.456412  816252 main.go:141] libmachine: (bridge-221184) Ensuring network default is active
	I1007 13:59:19.456768  816252 main.go:141] libmachine: (bridge-221184) Ensuring network mk-bridge-221184 is active
	I1007 13:59:19.457343  816252 main.go:141] libmachine: (bridge-221184) Getting domain xml...
	I1007 13:59:19.458230  816252 main.go:141] libmachine: (bridge-221184) Creating domain...
	I1007 13:59:19.846546  816252 main.go:141] libmachine: (bridge-221184) Waiting to get IP...
	I1007 13:59:19.847538  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:19.848330  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:19.848382  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:19.848218  816340 retry.go:31] will retry after 252.469598ms: waiting for machine to come up
	I1007 13:59:19.663464  814767 docker.go:233] disabling docker service ...
	I1007 13:59:19.663552  814767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:59:19.680586  814767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:59:19.696768  814767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:59:19.834669  814767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:59:19.960266  814767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:59:19.977397  814767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:59:19.999173  814767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:59:19.999260  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.012417  814767 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:59:20.012540  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.026548  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.038536  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.051037  814767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:59:20.064701  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.076999  814767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.096497  814767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:20.109279  814767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:59:20.121800  814767 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:59:20.121875  814767 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:59:20.137403  814767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:59:20.148935  814767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:59:20.268222  814767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:59:20.377215  814767 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:59:20.377309  814767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:59:20.384366  814767 start.go:563] Will wait 60s for crictl version
	I1007 13:59:20.384434  814767 ssh_runner.go:195] Run: which crictl
	I1007 13:59:20.389032  814767 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:59:20.440771  814767 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:59:20.440876  814767 ssh_runner.go:195] Run: crio --version
	I1007 13:59:20.473624  814767 ssh_runner.go:195] Run: crio --version
	I1007 13:59:20.509144  814767 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
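Taken together, the commands above disable and mask the docker units, point crictl at the CRI-O socket, and patch /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs is selected as the cgroup manager with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before CRI-O is restarted. Reconstructed from those sed edits (section headers are assumptions; this is not copied from the VM), the relevant fragment ends up roughly as:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]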
	I1007 13:59:19.624121  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:22.124814  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:20.510638  814767 main.go:141] libmachine: (flannel-221184) Calling .GetIP
	I1007 13:59:20.514111  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:20.514554  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:20.514576  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:20.514804  814767 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 13:59:20.519457  814767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:59:20.534552  814767 kubeadm.go:883] updating cluster {Name:flannel-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:flannel-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:59:20.534701  814767 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:59:20.534762  814767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:59:20.571789  814767 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:59:20.571871  814767 ssh_runner.go:195] Run: which lz4
	I1007 13:59:20.576201  814767 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:59:20.580878  814767 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:59:20.580917  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:59:22.227222  814767 crio.go:462] duration metric: took 1.651060273s to copy over tarball
	I1007 13:59:22.227318  814767 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:59:20.102770  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:20.103332  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:20.103356  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:20.103301  816340 retry.go:31] will retry after 327.359948ms: waiting for machine to come up
	I1007 13:59:20.431985  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:20.432563  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:20.432621  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:20.432543  816340 retry.go:31] will retry after 306.117122ms: waiting for machine to come up
	I1007 13:59:20.740290  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:20.740813  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:20.740844  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:20.740771  816340 retry.go:31] will retry after 555.069721ms: waiting for machine to come up
	I1007 13:59:21.297225  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:21.297797  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:21.297827  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:21.297752  816340 retry.go:31] will retry after 581.662255ms: waiting for machine to come up
	I1007 13:59:21.880712  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:21.881349  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:21.881379  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:21.881290  816340 retry.go:31] will retry after 608.627431ms: waiting for machine to come up
	I1007 13:59:22.491298  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:22.492019  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:22.492048  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:22.491952  816340 retry.go:31] will retry after 1.167297286s: waiting for machine to come up
	I1007 13:59:23.661650  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:23.662246  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:23.662275  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:23.662211  816340 retry.go:31] will retry after 1.378324474s: waiting for machine to come up
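The retry.go lines above show the driver polling the mk-bridge-221184 network for a DHCP lease matching the new MAC, sleeping a growing, jittered interval (roughly 250ms up to a few seconds) between attempts until the guest reports an address. A generic sketch of that wait-with-backoff pattern, using a hypothetical waitForIP helper rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a jittered, roughly doubling interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add up to 50% jitter so parallel machine creations don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Always-failing lookup just to exercise the loop in this sketch.
	_, err := waitForIP(func() (string, error) { return "", errNoIP }, 2*time.Second)
	fmt.Println(err)
}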
	I1007 13:59:24.626115  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:27.497692  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:24.655901  814767 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.428536969s)
	I1007 13:59:24.655945  814767 crio.go:469] duration metric: took 2.428675006s to extract the tarball
	I1007 13:59:24.655956  814767 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:59:24.695880  814767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:59:24.741652  814767 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:59:24.741685  814767 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:59:24.741708  814767 kubeadm.go:934] updating node { 192.168.39.119 8443 v1.31.1 crio true true} ...
	I1007 13:59:24.741834  814767 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-221184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:flannel-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1007 13:59:24.741917  814767 ssh_runner.go:195] Run: crio config
	I1007 13:59:24.789213  814767 cni.go:84] Creating CNI manager for "flannel"
	I1007 13:59:24.789239  814767 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:59:24.789266  814767 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.119 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-221184 NodeName:flannel-221184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:59:24.789428  814767 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-221184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:59:24.789508  814767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:59:24.801059  814767 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:59:24.801149  814767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:59:24.811334  814767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1007 13:59:24.830622  814767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:59:24.851863  814767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1007 13:59:24.872977  814767 ssh_runner.go:195] Run: grep 192.168.39.119	control-plane.minikube.internal$ /etc/hosts
	I1007 13:59:24.877469  814767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:59:24.892849  814767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:59:25.031416  814767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:59:25.055400  814767 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184 for IP: 192.168.39.119
	I1007 13:59:25.055426  814767 certs.go:194] generating shared ca certs ...
	I1007 13:59:25.055442  814767 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.055626  814767 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:59:25.055674  814767 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:59:25.055687  814767 certs.go:256] generating profile certs ...
	I1007 13:59:25.055761  814767 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.key
	I1007 13:59:25.055779  814767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.crt with IP's: []
	I1007 13:59:25.239132  814767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.crt ...
	I1007 13:59:25.239168  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.crt: {Name:mk158506cdb9ade6b3d9b35607b9af9d0b09ad4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.239359  814767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.key ...
	I1007 13:59:25.239371  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/client.key: {Name:mk38a65a048f7b4327742c488fcc2aa26294922f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.239465  814767 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key.c436ca4a
	I1007 13:59:25.239484  814767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt.c436ca4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.119]
	I1007 13:59:25.490768  814767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt.c436ca4a ...
	I1007 13:59:25.490806  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt.c436ca4a: {Name:mk2ef0e23c0c3aac4d09feb0892f2f0b2102b5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.491015  814767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key.c436ca4a ...
	I1007 13:59:25.491036  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key.c436ca4a: {Name:mkc11cae1dc3b32c7601a06533b5913841976d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.491165  814767 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt.c436ca4a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt
	I1007 13:59:25.491286  814767 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key.c436ca4a -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key
	I1007 13:59:25.491377  814767 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.key
	I1007 13:59:25.491401  814767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.crt with IP's: []
	I1007 13:59:25.636500  814767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.crt ...
	I1007 13:59:25.636537  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.crt: {Name:mk0a4da163da1f803e0c92e42ab93b853546b692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.636733  814767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.key ...
	I1007 13:59:25.636751  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.key: {Name:mk310e8d7ee28c8fd6cdc0e8396cd62c7d889d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:25.636971  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:59:25.637027  814767 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:59:25.637050  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:59:25.637082  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:59:25.637115  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:59:25.637147  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:59:25.637199  814767 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:59:25.637845  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:59:25.677852  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:59:25.707498  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:59:25.738220  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:59:25.785811  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:59:25.832443  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:59:25.858706  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:59:25.886711  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/flannel-221184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:59:25.914864  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:59:25.942008  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:59:25.969361  814767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:59:25.995823  814767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
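In the certs.go steps above, the profile gets a "minikube-user" client certificate, an apiserver serving certificate whose SANs cover the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.39.119, and an "aggregator" front-proxy client certificate, all signed by the cached minikubeCA; the resulting files are then copied into /var/lib/minikube/certs and /usr/share/ca-certificates over SSH. A condensed crypto/x509 sketch of issuing a serving certificate with those IP SANs from a CA (illustrative only; key sizes, lifetimes, and subjects are assumptions, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair; in minikube this already exists as ca.crt/ca.key under .minikube.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert; the IP SANs mirror the list in the log above.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.119"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}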
	I1007 13:59:26.018486  814767 ssh_runner.go:195] Run: openssl version
	I1007 13:59:26.026683  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:59:26.041190  814767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:59:26.046440  814767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:59:26.046518  814767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:59:26.053255  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:59:26.067669  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:59:26.080931  814767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:26.087674  814767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:26.087761  814767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:26.094156  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:59:26.107078  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:59:26.119989  814767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:59:26.125415  814767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:59:26.125503  814767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:59:26.132548  814767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:59:26.145070  814767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:59:26.149699  814767 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:59:26.149775  814767 kubeadm.go:392] StartCluster: {Name:flannel-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:flannel-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:59:26.149867  814767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:59:26.149941  814767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:59:26.197011  814767 cri.go:89] found id: ""
	I1007 13:59:26.197109  814767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:59:26.210012  814767 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:59:26.222377  814767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:59:26.234882  814767 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:59:26.234906  814767 kubeadm.go:157] found existing configuration files:
	
	I1007 13:59:26.234998  814767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:59:26.249114  814767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:59:26.249181  814767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:59:26.263329  814767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:59:26.277072  814767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:59:26.277180  814767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:59:26.292475  814767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:59:26.307154  814767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:59:26.307254  814767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:59:26.323195  814767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:59:26.337509  814767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:59:26.337589  814767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:59:26.352570  814767 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:59:26.408916  814767 kubeadm.go:310] W1007 13:59:26.388326     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:59:26.409929  814767 kubeadm.go:310] W1007 13:59:26.389652     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:59:26.557596  814767 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:59:25.042739  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:25.043175  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:25.043207  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:25.043130  816340 retry.go:31] will retry after 1.7871973s: waiting for machine to come up
	I1007 13:59:26.832090  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:26.832673  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:26.832725  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:26.832616  816340 retry.go:31] will retry after 1.700556396s: waiting for machine to come up
	I1007 13:59:28.535343  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:28.535789  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:28.535823  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:28.535739  816340 retry.go:31] will retry after 2.79123326s: waiting for machine to come up
	I1007 13:59:29.622502  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:32.122774  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:31.329289  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:31.329830  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:31.329860  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:31.329775  816340 retry.go:31] will retry after 2.725961553s: waiting for machine to come up
	I1007 13:59:34.057178  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:34.057783  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:34.057815  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:34.057742  816340 retry.go:31] will retry after 3.215711872s: waiting for machine to come up
	I1007 13:59:37.566543  814767 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:59:37.566626  814767 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:59:37.566736  814767 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:59:37.566857  814767 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:59:37.567006  814767 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:59:37.567085  814767 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:59:37.569125  814767 out.go:235]   - Generating certificates and keys ...
	I1007 13:59:37.569235  814767 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:59:37.569319  814767 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:59:37.569410  814767 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:59:37.569486  814767 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:59:37.569564  814767 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:59:37.569635  814767 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:59:37.569735  814767 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:59:37.569908  814767 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-221184 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	I1007 13:59:37.569991  814767 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:59:37.570164  814767 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-221184 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	I1007 13:59:37.570250  814767 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:59:37.570330  814767 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:59:37.570375  814767 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:59:37.570424  814767 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:59:37.570465  814767 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:59:37.570530  814767 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:59:37.570572  814767 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:59:37.570644  814767 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:59:37.570719  814767 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:59:37.570798  814767 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:59:37.570861  814767 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:59:37.572171  814767 out.go:235]   - Booting up control plane ...
	I1007 13:59:37.572268  814767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:59:37.572356  814767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:59:37.572460  814767 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:59:37.572625  814767 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:59:37.572739  814767 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:59:37.572796  814767 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:59:37.572910  814767 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:59:37.572999  814767 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:59:37.573078  814767 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002243736s
	I1007 13:59:37.573152  814767 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:59:37.573224  814767 kubeadm.go:310] [api-check] The API server is healthy after 5.503453735s
	I1007 13:59:37.573377  814767 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:59:37.573553  814767 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:59:37.573646  814767 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:59:37.573862  814767 kubeadm.go:310] [mark-control-plane] Marking the node flannel-221184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:59:37.573917  814767 kubeadm.go:310] [bootstrap-token] Using token: gki7fl.19yqfkqwhauflvxy
	I1007 13:59:37.575432  814767 out.go:235]   - Configuring RBAC rules ...
	I1007 13:59:37.575529  814767 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:59:37.575641  814767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:59:37.575776  814767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:59:37.575902  814767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:59:37.576006  814767 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:59:37.576086  814767 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:59:37.576200  814767 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:59:37.576272  814767 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:59:37.576327  814767 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:59:37.576336  814767 kubeadm.go:310] 
	I1007 13:59:37.576421  814767 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:59:37.576433  814767 kubeadm.go:310] 
	I1007 13:59:37.576560  814767 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:59:37.576573  814767 kubeadm.go:310] 
	I1007 13:59:37.576607  814767 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:59:37.576694  814767 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:59:37.576765  814767 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:59:37.576780  814767 kubeadm.go:310] 
	I1007 13:59:37.576852  814767 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:59:37.576867  814767 kubeadm.go:310] 
	I1007 13:59:37.576930  814767 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:59:37.576940  814767 kubeadm.go:310] 
	I1007 13:59:37.577027  814767 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:59:37.577120  814767 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:59:37.577231  814767 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:59:37.577242  814767 kubeadm.go:310] 
	I1007 13:59:37.577330  814767 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:59:37.577422  814767 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:59:37.577432  814767 kubeadm.go:310] 
	I1007 13:59:37.577523  814767 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gki7fl.19yqfkqwhauflvxy \
	I1007 13:59:37.577644  814767 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 13:59:37.577673  814767 kubeadm.go:310] 	--control-plane 
	I1007 13:59:37.577681  814767 kubeadm.go:310] 
	I1007 13:59:37.577783  814767 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:59:37.577796  814767 kubeadm.go:310] 
	I1007 13:59:37.577897  814767 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gki7fl.19yqfkqwhauflvxy \
	I1007 13:59:37.578069  814767 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
	I1007 13:59:37.578089  814767 cni.go:84] Creating CNI manager for "flannel"
	I1007 13:59:37.579655  814767 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1007 13:59:34.123954  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:36.622714  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
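Interleaved with the flannel-221184 bring-up, process 812940 (a profile started earlier in the run) keeps polling its coredns-7c65d6cfc9-tfl8g pod, which has not yet reported Ready. A minimal client-go sketch of that readiness poll, assuming a kubeconfig in the default location rather than minikube's pod_ready.go plumbing:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-7c65d6cfc9-tfl8g", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}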
	I1007 13:59:37.580934  814767 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 13:59:37.587957  814767 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 13:59:37.587987  814767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1007 13:59:37.609759  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 13:59:38.076570  814767 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:59:38.076682  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:38.076724  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-221184 minikube.k8s.io/updated_at=2024_10_07T13_59_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=flannel-221184 minikube.k8s.io/primary=true
	I1007 13:59:38.272027  814767 ops.go:34] apiserver oom_adj: -16
	I1007 13:59:38.272185  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:38.773006  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:39.273054  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:37.275166  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:37.275711  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find current IP address of domain bridge-221184 in network mk-bridge-221184
	I1007 13:59:37.275730  816252 main.go:141] libmachine: (bridge-221184) DBG | I1007 13:59:37.275671  816340 retry.go:31] will retry after 4.150920952s: waiting for machine to come up
	I1007 13:59:39.772334  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:40.272632  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:40.772509  814767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:59:40.891196  814767 kubeadm.go:1113] duration metric: took 2.814587987s to wait for elevateKubeSystemPrivileges
	I1007 13:59:40.891245  814767 kubeadm.go:394] duration metric: took 14.741476831s to StartCluster
	I1007 13:59:40.891274  814767 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:40.891366  814767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:59:40.894413  814767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:40.895041  814767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:59:40.895057  814767 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:59:40.895128  814767 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:59:40.895239  814767 addons.go:69] Setting storage-provisioner=true in profile "flannel-221184"
	I1007 13:59:40.895261  814767 addons.go:234] Setting addon storage-provisioner=true in "flannel-221184"
	I1007 13:59:40.895260  814767 addons.go:69] Setting default-storageclass=true in profile "flannel-221184"
	I1007 13:59:40.895289  814767 config.go:182] Loaded profile config "flannel-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:40.895298  814767 host.go:66] Checking if "flannel-221184" exists ...
	I1007 13:59:40.895301  814767 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-221184"
	I1007 13:59:40.895866  814767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:40.895920  814767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:40.896073  814767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:40.896126  814767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:40.897183  814767 out.go:177] * Verifying Kubernetes components...
	I1007 13:59:40.898579  814767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:59:40.912880  814767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I1007 13:59:40.913443  814767 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:40.914085  814767 main.go:141] libmachine: Using API Version  1
	I1007 13:59:40.914117  814767 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:40.914466  814767 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:40.914704  814767 main.go:141] libmachine: (flannel-221184) Calling .GetState
	I1007 13:59:40.916792  814767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
	I1007 13:59:40.917218  814767 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:40.917670  814767 main.go:141] libmachine: Using API Version  1
	I1007 13:59:40.917692  814767 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:40.918096  814767 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:40.918573  814767 addons.go:234] Setting addon default-storageclass=true in "flannel-221184"
	I1007 13:59:40.918622  814767 host.go:66] Checking if "flannel-221184" exists ...
	I1007 13:59:40.918774  814767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:40.918828  814767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:40.918968  814767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:40.919006  814767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:40.935397  814767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I1007 13:59:40.935929  814767 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:40.936473  814767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1007 13:59:40.936582  814767 main.go:141] libmachine: Using API Version  1
	I1007 13:59:40.936606  814767 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:40.936981  814767 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:40.937540  814767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:59:40.937587  814767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:59:40.938525  814767 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:40.939114  814767 main.go:141] libmachine: Using API Version  1
	I1007 13:59:40.939145  814767 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:40.939579  814767 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:40.939768  814767 main.go:141] libmachine: (flannel-221184) Calling .GetState
	I1007 13:59:40.943541  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:40.945584  814767 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:59:39.122517  812940 pod_ready.go:103] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:40.626722  812940 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:40.626753  812940 pod_ready.go:82] duration metric: took 25.011880603s for pod "coredns-7c65d6cfc9-tfl8g" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.626765  812940 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.637801  812940 pod_ready.go:93] pod "etcd-enable-default-cni-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:40.637831  812940 pod_ready.go:82] duration metric: took 11.057885ms for pod "etcd-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.637848  812940 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.648014  812940 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:40.648049  812940 pod_ready.go:82] duration metric: took 10.191376ms for pod "kube-apiserver-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.648063  812940 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.657551  812940 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:40.657581  812940 pod_ready.go:82] duration metric: took 9.508339ms for pod "kube-controller-manager-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.657597  812940 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wx5lh" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.666281  812940 pod_ready.go:93] pod "kube-proxy-wx5lh" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:40.666310  812940 pod_ready.go:82] duration metric: took 8.706284ms for pod "kube-proxy-wx5lh" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:40.666320  812940 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:41.020838  812940 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 13:59:41.020869  812940 pod_ready.go:82] duration metric: took 354.541139ms for pod "kube-scheduler-enable-default-cni-221184" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:41.020878  812940 pod_ready.go:39] duration metric: took 36.424891948s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
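	The pod_ready waits above keep polling each control-plane pod until its PodReady condition reports True. A minimal sketch of that check using client-go; the kubeconfig path, namespace, pod name and 2-second poll interval are taken from this log run, but the helper itself is illustrative rather than minikube's actual pod_ready implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True, the status the
    // pod_ready waits above are polling for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-tfl8g", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }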
	I1007 13:59:41.020900  812940 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:59:41.020962  812940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:59:41.044071  812940 api_server.go:72] duration metric: took 37.701973498s to wait for apiserver process to appear ...
	I1007 13:59:41.044101  812940 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:59:41.044129  812940 api_server.go:253] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I1007 13:59:41.051454  812940 api_server.go:279] https://192.168.50.226:8443/healthz returned 200:
	ok
	I1007 13:59:41.052726  812940 api_server.go:141] control plane version: v1.31.1
	I1007 13:59:41.052756  812940 api_server.go:131] duration metric: took 8.64818ms to wait for apiserver health ...
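	The healthz probe logged just above issues an HTTPS GET against https://192.168.50.226:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal sketch of that probe; InsecureSkipVerify is an assumption to keep the example short, whereas minikube authenticates with the cluster's CA and client certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver /healthz endpoint, as in the log above.
    func checkHealthz(endpoint string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify is only for this sketch; the real check trusts
            // the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("healthz: %s\n", body) // expect "ok"
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.50.226:8443"); err != nil {
            fmt.Println(err)
        }
    }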
	I1007 13:59:41.052765  812940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:59:41.222863  812940 system_pods.go:59] 7 kube-system pods found
	I1007 13:59:41.222899  812940 system_pods.go:61] "coredns-7c65d6cfc9-tfl8g" [80c81746-c389-4856-83ab-2028d7c3698e] Running
	I1007 13:59:41.222905  812940 system_pods.go:61] "etcd-enable-default-cni-221184" [8d99c36d-aeba-4c02-924a-cef00970ccbf] Running
	I1007 13:59:41.222911  812940 system_pods.go:61] "kube-apiserver-enable-default-cni-221184" [1234daaa-649f-40c1-b56d-0967da3abab6] Running
	I1007 13:59:41.222914  812940 system_pods.go:61] "kube-controller-manager-enable-default-cni-221184" [0b374611-b7c2-4a35-b328-b68bd09c020a] Running
	I1007 13:59:41.222917  812940 system_pods.go:61] "kube-proxy-wx5lh" [a50d93a4-7c74-4c29-87f1-319602197bfb] Running
	I1007 13:59:41.222921  812940 system_pods.go:61] "kube-scheduler-enable-default-cni-221184" [044ad0de-fbba-460e-b53d-80f69f24b441] Running
	I1007 13:59:41.222924  812940 system_pods.go:61] "storage-provisioner" [8118aa10-6f21-44bb-89db-d918f555da59] Running
	I1007 13:59:41.222930  812940 system_pods.go:74] duration metric: took 170.159465ms to wait for pod list to return data ...
	I1007 13:59:41.222940  812940 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:59:41.420218  812940 default_sa.go:45] found service account: "default"
	I1007 13:59:41.420255  812940 default_sa.go:55] duration metric: took 197.307092ms for default service account to be created ...
	I1007 13:59:41.420270  812940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:59:41.622422  812940 system_pods.go:86] 7 kube-system pods found
	I1007 13:59:41.622457  812940 system_pods.go:89] "coredns-7c65d6cfc9-tfl8g" [80c81746-c389-4856-83ab-2028d7c3698e] Running
	I1007 13:59:41.622463  812940 system_pods.go:89] "etcd-enable-default-cni-221184" [8d99c36d-aeba-4c02-924a-cef00970ccbf] Running
	I1007 13:59:41.622468  812940 system_pods.go:89] "kube-apiserver-enable-default-cni-221184" [1234daaa-649f-40c1-b56d-0967da3abab6] Running
	I1007 13:59:41.622472  812940 system_pods.go:89] "kube-controller-manager-enable-default-cni-221184" [0b374611-b7c2-4a35-b328-b68bd09c020a] Running
	I1007 13:59:41.622475  812940 system_pods.go:89] "kube-proxy-wx5lh" [a50d93a4-7c74-4c29-87f1-319602197bfb] Running
	I1007 13:59:41.622478  812940 system_pods.go:89] "kube-scheduler-enable-default-cni-221184" [044ad0de-fbba-460e-b53d-80f69f24b441] Running
	I1007 13:59:41.622482  812940 system_pods.go:89] "storage-provisioner" [8118aa10-6f21-44bb-89db-d918f555da59] Running
	I1007 13:59:41.622489  812940 system_pods.go:126] duration metric: took 202.21227ms to wait for k8s-apps to be running ...
	I1007 13:59:41.622500  812940 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:59:41.622557  812940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:59:41.645480  812940 system_svc.go:56] duration metric: took 22.967487ms WaitForService to wait for kubelet
	I1007 13:59:41.645514  812940 kubeadm.go:582] duration metric: took 38.303421735s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:59:41.645540  812940 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:59:41.821507  812940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 13:59:41.821541  812940 node_conditions.go:123] node cpu capacity is 2
	I1007 13:59:41.821559  812940 node_conditions.go:105] duration metric: took 176.01336ms to run NodePressure ...
	I1007 13:59:41.821574  812940 start.go:241] waiting for startup goroutines ...
	I1007 13:59:41.821584  812940 start.go:246] waiting for cluster config update ...
	I1007 13:59:41.821599  812940 start.go:255] writing updated cluster config ...
	I1007 13:59:41.821836  812940 ssh_runner.go:195] Run: rm -f paused
	I1007 13:59:41.887281  812940 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:59:41.888819  812940 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-221184" cluster and "default" namespace by default
	I1007 13:59:40.946952  814767 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:59:40.946975  814767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:59:40.946999  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:40.951002  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:40.951503  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:40.951536  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:40.951817  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:40.952020  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:40.952190  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:40.952308  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:40.956226  814767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40019
	I1007 13:59:40.956741  814767 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:59:40.957368  814767 main.go:141] libmachine: Using API Version  1
	I1007 13:59:40.957395  814767 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:59:40.957722  814767 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:59:40.957921  814767 main.go:141] libmachine: (flannel-221184) Calling .GetState
	I1007 13:59:40.959889  814767 main.go:141] libmachine: (flannel-221184) Calling .DriverName
	I1007 13:59:40.960136  814767 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:59:40.960154  814767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:59:40.960181  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHHostname
	I1007 13:59:40.963412  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:40.963833  814767 main.go:141] libmachine: (flannel-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:cc:79", ip: ""} in network mk-flannel-221184: {Iface:virbr3 ExpiryTime:2024-10-07 14:59:10 +0000 UTC Type:0 Mac:52:54:00:36:cc:79 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:flannel-221184 Clientid:01:52:54:00:36:cc:79}
	I1007 13:59:40.963855  814767 main.go:141] libmachine: (flannel-221184) DBG | domain flannel-221184 has defined IP address 192.168.39.119 and MAC address 52:54:00:36:cc:79 in network mk-flannel-221184
	I1007 13:59:40.964029  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHPort
	I1007 13:59:40.965102  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHKeyPath
	I1007 13:59:40.965352  814767 main.go:141] libmachine: (flannel-221184) Calling .GetSSHUsername
	I1007 13:59:40.965464  814767 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/flannel-221184/id_rsa Username:docker}
	I1007 13:59:41.108649  814767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 13:59:41.115474  814767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:59:41.223426  814767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:59:41.314719  814767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:59:41.506115  814767 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
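	The kubectl | sed | kubectl replace pipeline at 13:59:41.108 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 here), which is what the "host record injected" line confirms. A minimal sketch of the same edit applied to an in-memory Corefile string; the sample Corefile and the injectHostRecord name are illustrative, and the sketch skips the extra `log` directive the real pipeline also inserts:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block ahead of the forward plugin,
    // mirroring the sed edit in the log above.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }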
	I1007 13:59:41.507620  814767 node_ready.go:35] waiting up to 15m0s for node "flannel-221184" to be "Ready" ...
	I1007 13:59:42.016628  814767 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-221184" context rescaled to 1 replicas
	I1007 13:59:42.033623  814767 main.go:141] libmachine: Making call to close driver server
	I1007 13:59:42.033657  814767 main.go:141] libmachine: (flannel-221184) Calling .Close
	I1007 13:59:42.033759  814767 main.go:141] libmachine: Making call to close driver server
	I1007 13:59:42.033780  814767 main.go:141] libmachine: (flannel-221184) Calling .Close
	I1007 13:59:42.034006  814767 main.go:141] libmachine: (flannel-221184) DBG | Closing plugin on server side
	I1007 13:59:42.034058  814767 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:59:42.034064  814767 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:59:42.034072  814767 main.go:141] libmachine: Making call to close driver server
	I1007 13:59:42.034085  814767 main.go:141] libmachine: (flannel-221184) Calling .Close
	I1007 13:59:42.034171  814767 main.go:141] libmachine: (flannel-221184) DBG | Closing plugin on server side
	I1007 13:59:42.034259  814767 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:59:42.034278  814767 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:59:42.034291  814767 main.go:141] libmachine: Making call to close driver server
	I1007 13:59:42.034299  814767 main.go:141] libmachine: (flannel-221184) Calling .Close
	I1007 13:59:42.034470  814767 main.go:141] libmachine: (flannel-221184) DBG | Closing plugin on server side
	I1007 13:59:42.034493  814767 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:59:42.034499  814767 main.go:141] libmachine: (flannel-221184) DBG | Closing plugin on server side
	I1007 13:59:42.034504  814767 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:59:42.036494  814767 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:59:42.036520  814767 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:59:42.065785  814767 main.go:141] libmachine: Making call to close driver server
	I1007 13:59:42.065825  814767 main.go:141] libmachine: (flannel-221184) Calling .Close
	I1007 13:59:42.066239  814767 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:59:42.066261  814767 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:59:42.066245  814767 main.go:141] libmachine: (flannel-221184) DBG | Closing plugin on server side
	I1007 13:59:42.068483  814767 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 13:59:42.069787  814767 addons.go:510] duration metric: took 1.174664678s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 13:59:43.512405  814767 node_ready.go:53] node "flannel-221184" has status "Ready":"False"
	I1007 13:59:41.428340  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.428787  816252 main.go:141] libmachine: (bridge-221184) Found IP for machine: 192.168.72.247
	I1007 13:59:41.428814  816252 main.go:141] libmachine: (bridge-221184) Reserving static IP address...
	I1007 13:59:41.428829  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has current primary IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.429303  816252 main.go:141] libmachine: (bridge-221184) DBG | unable to find host DHCP lease matching {name: "bridge-221184", mac: "52:54:00:a1:01:5f", ip: "192.168.72.247"} in network mk-bridge-221184
	I1007 13:59:41.528351  816252 main.go:141] libmachine: (bridge-221184) DBG | Getting to WaitForSSH function...
	I1007 13:59:41.528386  816252 main.go:141] libmachine: (bridge-221184) Reserved static IP address: 192.168.72.247
	I1007 13:59:41.528400  816252 main.go:141] libmachine: (bridge-221184) Waiting for SSH to be available...
	I1007 13:59:41.532167  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.532738  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:41.532764  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.533154  816252 main.go:141] libmachine: (bridge-221184) DBG | Using SSH client type: external
	I1007 13:59:41.533186  816252 main.go:141] libmachine: (bridge-221184) DBG | Using SSH private key: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa (-rw-------)
	I1007 13:59:41.533235  816252 main.go:141] libmachine: (bridge-221184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:59:41.533245  816252 main.go:141] libmachine: (bridge-221184) DBG | About to run SSH command:
	I1007 13:59:41.533256  816252 main.go:141] libmachine: (bridge-221184) DBG | exit 0
	I1007 13:59:41.663470  816252 main.go:141] libmachine: (bridge-221184) DBG | SSH cmd err, output: <nil>: 
	I1007 13:59:41.663818  816252 main.go:141] libmachine: (bridge-221184) KVM machine creation complete!
	I1007 13:59:41.664179  816252 main.go:141] libmachine: (bridge-221184) Calling .GetConfigRaw
	I1007 13:59:41.664829  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:41.665052  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:41.665252  816252 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:59:41.665267  816252 main.go:141] libmachine: (bridge-221184) Calling .GetState
	I1007 13:59:41.666806  816252 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:59:41.666825  816252 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:59:41.666847  816252 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:59:41.666855  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:41.670054  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.670558  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:41.670588  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.670804  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:41.671032  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.671214  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.671365  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:41.671534  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:41.671800  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:41.671818  816252 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:59:41.790625  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:59:41.790653  816252 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:59:41.790663  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:41.794038  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.794456  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:41.794503  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.794861  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:41.795087  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.795291  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.795454  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:41.795639  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:41.795923  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:41.795942  816252 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:59:41.920405  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:59:41.920520  816252 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:59:41.920533  816252 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:59:41.920545  816252 main.go:141] libmachine: (bridge-221184) Calling .GetMachineName
	I1007 13:59:41.920885  816252 buildroot.go:166] provisioning hostname "bridge-221184"
	I1007 13:59:41.920911  816252 main.go:141] libmachine: (bridge-221184) Calling .GetMachineName
	I1007 13:59:41.921150  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:41.924996  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.925488  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:41.925519  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:41.925738  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:41.925968  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.926161  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:41.926292  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:41.926496  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:41.926736  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:41.926763  816252 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-221184 && echo "bridge-221184" | sudo tee /etc/hostname
	I1007 13:59:42.064291  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-221184
	
	I1007 13:59:42.064324  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.070675  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.071115  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.071153  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.071545  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:42.071732  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.071894  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.071995  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:42.072130  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:42.072341  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:42.072366  816252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-221184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-221184/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-221184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:59:42.198443  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:59:42.198479  816252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18424-747025/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-747025/.minikube}
	I1007 13:59:42.198537  816252 buildroot.go:174] setting up certificates
	I1007 13:59:42.198554  816252 provision.go:84] configureAuth start
	I1007 13:59:42.198571  816252 main.go:141] libmachine: (bridge-221184) Calling .GetMachineName
	I1007 13:59:42.198863  816252 main.go:141] libmachine: (bridge-221184) Calling .GetIP
	I1007 13:59:42.201557  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.201961  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.201993  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.202195  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.204840  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.205125  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.205157  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.205293  816252 provision.go:143] copyHostCerts
	I1007 13:59:42.205356  816252 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem, removing ...
	I1007 13:59:42.205385  816252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem
	I1007 13:59:42.205446  816252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/ca.pem (1082 bytes)
	I1007 13:59:42.205588  816252 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem, removing ...
	I1007 13:59:42.205595  816252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem
	I1007 13:59:42.205616  816252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/cert.pem (1123 bytes)
	I1007 13:59:42.205704  816252 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem, removing ...
	I1007 13:59:42.205711  816252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem
	I1007 13:59:42.205746  816252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-747025/.minikube/key.pem (1675 bytes)
	I1007 13:59:42.205833  816252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem org=jenkins.bridge-221184 san=[127.0.0.1 192.168.72.247 bridge-221184 localhost minikube]
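	The provision step above generates a server certificate whose subject alternative names cover 127.0.0.1, the VM IP, the profile name, localhost and minikube. A minimal sketch of building such a SAN list with crypto/x509; it self-signs for brevity, whereas minikube signs the server key with the CA key pair referenced in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert creates a self-signed server certificate whose SANs cover the
    // listed IPs and hostnames, similar in spirit to the san=[...] list logged above.
    func newServerCert(ips []net.IP, dnsNames []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.bridge-221184"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        pemCert, err := newServerCert(
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.247")},
            []string{"bridge-221184", "localhost", "minikube"},
        )
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(string(pemCert))
    }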
	I1007 13:59:42.304550  816252 provision.go:177] copyRemoteCerts
	I1007 13:59:42.304639  816252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:59:42.304681  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.307574  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.308004  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.308039  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.308378  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:42.308594  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.308782  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:42.308969  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 13:59:42.398154  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:59:42.432911  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:59:42.464891  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:59:42.493113  816252 provision.go:87] duration metric: took 294.539164ms to configureAuth
	I1007 13:59:42.493155  816252 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:59:42.493380  816252 config.go:182] Loaded profile config "bridge-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:59:42.493485  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.496224  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.496615  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.496646  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.496947  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:42.497164  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.497353  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.497475  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:42.497644  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:42.497896  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:42.497921  816252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:59:42.768228  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:59:42.768264  816252 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:59:42.768275  816252 main.go:141] libmachine: (bridge-221184) Calling .GetURL
	I1007 13:59:42.769728  816252 main.go:141] libmachine: (bridge-221184) DBG | Using libvirt version 6000000
	I1007 13:59:42.772481  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.772917  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.772951  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.773103  816252 main.go:141] libmachine: Docker is up and running!
	I1007 13:59:42.773123  816252 main.go:141] libmachine: Reticulating splines...
	I1007 13:59:42.773132  816252 client.go:171] duration metric: took 23.819024313s to LocalClient.Create
	I1007 13:59:42.773160  816252 start.go:167] duration metric: took 23.819150355s to libmachine.API.Create "bridge-221184"
	I1007 13:59:42.773189  816252 start.go:293] postStartSetup for "bridge-221184" (driver="kvm2")
	I1007 13:59:42.773206  816252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:59:42.773233  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:42.773481  816252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:59:42.773512  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.776187  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.776610  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.776646  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.776816  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:42.776998  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.777205  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:42.777430  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 13:59:42.875806  816252 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:59:42.881030  816252 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:59:42.881074  816252 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/addons for local assets ...
	I1007 13:59:42.881162  816252 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-747025/.minikube/files for local assets ...
	I1007 13:59:42.881267  816252 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem -> 7543242.pem in /etc/ssl/certs
	I1007 13:59:42.881398  816252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:59:42.894625  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:59:42.929105  816252 start.go:296] duration metric: took 155.894338ms for postStartSetup
	I1007 13:59:42.929176  816252 main.go:141] libmachine: (bridge-221184) Calling .GetConfigRaw
	I1007 13:59:42.929992  816252 main.go:141] libmachine: (bridge-221184) Calling .GetIP
	I1007 13:59:42.933271  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.933620  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.933652  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.934091  816252 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/config.json ...
	I1007 13:59:42.934321  816252 start.go:128] duration metric: took 24.002549041s to createHost
	I1007 13:59:42.934348  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:42.937315  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.937636  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:42.937665  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:42.937834  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:42.938062  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.938261  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:42.938429  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:42.938668  816252 main.go:141] libmachine: Using SSH client type: native
	I1007 13:59:42.938874  816252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.247 22 <nil> <nil>}
	I1007 13:59:42.938887  816252 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:59:43.059512  816252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728309583.011840263
	
	I1007 13:59:43.059543  816252 fix.go:216] guest clock: 1728309583.011840263
	I1007 13:59:43.059553  816252 fix.go:229] Guest: 2024-10-07 13:59:43.011840263 +0000 UTC Remote: 2024-10-07 13:59:42.934336298 +0000 UTC m=+33.025375169 (delta=77.503965ms)
	I1007 13:59:43.059577  816252 fix.go:200] guest clock delta is within tolerance: 77.503965ms
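	The guest-clock check above runs `date +%s.%N` in the VM, compares the result against the host clock, and accepts the machine when the skew is small (77.5ms here). A minimal sketch of that comparison; the one-second tolerance is a placeholder, since the log does not show the threshold minikube actually uses, and the float parse loses sub-microsecond precision:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns its
    // offset from the given local time, as in the guest-clock check logged above.
    func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(local), nil
    }

    func main() {
        const tolerance = time.Second // hypothetical threshold for this sketch
        delta, err := clockDelta("1728309583.011840263", time.Unix(1728309582, 934336298))
        if err != nil {
            fmt.Println(err)
            return
        }
        if -tolerance < delta && delta < tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }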
	I1007 13:59:43.059583  816252 start.go:83] releasing machines lock for "bridge-221184", held for 24.127990218s
	I1007 13:59:43.059604  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:43.059930  816252 main.go:141] libmachine: (bridge-221184) Calling .GetIP
	I1007 13:59:43.063361  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.063889  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:43.063918  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.064095  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:43.064763  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:43.065057  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 13:59:43.065173  816252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:59:43.065221  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:43.065421  816252 ssh_runner.go:195] Run: cat /version.json
	I1007 13:59:43.065493  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 13:59:43.068525  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.068719  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.068928  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:43.068961  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.069160  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:43.069290  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:43.069324  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:43.069367  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:43.069504  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 13:59:43.069657  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 13:59:43.069661  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:43.069887  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 13:59:43.069883  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 13:59:43.070081  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 13:59:43.185311  816252 ssh_runner.go:195] Run: systemctl --version
	I1007 13:59:43.193784  816252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:59:43.361528  816252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:59:43.370251  816252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:59:43.370336  816252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:59:43.390742  816252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:59:43.390775  816252 start.go:495] detecting cgroup driver to use...
	I1007 13:59:43.390858  816252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:59:43.413278  816252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:59:43.436104  816252 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:59:43.436182  816252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:59:43.454924  816252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:59:43.475267  816252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:59:43.644659  816252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:59:43.866476  816252 docker.go:233] disabling docker service ...
	I1007 13:59:43.866559  816252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:59:43.887010  816252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:59:43.906386  816252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:59:44.066882  816252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:59:44.228594  816252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:59:44.245483  816252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:59:44.269842  816252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:59:44.269912  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.283308  816252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:59:44.283392  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.295787  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.311797  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.324069  816252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:59:44.336840  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.349850  816252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:59:44.372241  816252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
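For reference, the net effect of the sed edits above can be checked directly on the node; a minimal sketch, assuming the stock 02-crio.conf drop-in shipped in the minikube ISO:
    # Inspect the keys the commands above rewrite; the values shown are what they set,
    # the rest of the drop-in is left as shipped.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]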
	I1007 13:59:44.385456  816252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:59:44.399921  816252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:59:44.400010  816252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:59:44.418339  816252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
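The two steps above (the sysctl probe that fails until br_netfilter is loaded, then enabling IPv4 forwarding) can be reproduced by hand; a sketch, assuming a guest where br_netfilter is built as a module:
    sudo modprobe br_netfilter                        # makes /proc/sys/net/bridge/* appear
    sysctl net.bridge.bridge-nf-call-iptables         # now resolves instead of "No such file or directory"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # same effect as the command in the log above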
	I1007 13:59:44.430508  816252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:59:44.606681  816252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:59:44.718505  816252 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:59:44.718681  816252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:59:44.725281  816252 start.go:563] Will wait 60s for crictl version
	I1007 13:59:44.725353  816252 ssh_runner.go:195] Run: which crictl
	I1007 13:59:44.730895  816252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:59:44.784238  816252 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:59:44.784361  816252 ssh_runner.go:195] Run: crio --version
	I1007 13:59:44.821082  816252 ssh_runner.go:195] Run: crio --version
	I1007 13:59:44.870490  816252 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:59:44.871716  816252 main.go:141] libmachine: (bridge-221184) Calling .GetIP
	I1007 13:59:44.875585  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:44.876223  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 13:59:44.876352  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 13:59:44.876754  816252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 13:59:44.883778  816252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:59:44.903531  816252 kubeadm.go:883] updating cluster {Name:bridge-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:bridge-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:59:44.903688  816252 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:59:44.903773  816252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:59:44.942910  816252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:59:44.942984  816252 ssh_runner.go:195] Run: which lz4
	I1007 13:59:44.947522  816252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:59:46.015354  814767 node_ready.go:53] node "flannel-221184" has status "Ready":"False"
	I1007 13:59:48.011678  814767 node_ready.go:49] node "flannel-221184" has status "Ready":"True"
	I1007 13:59:48.011713  814767 node_ready.go:38] duration metric: took 6.504061132s for node "flannel-221184" to be "Ready" ...
	I1007 13:59:48.011728  814767 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:59:48.019921  814767 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace to be "Ready" ...
	I1007 13:59:44.953330  816252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:59:44.953373  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:59:46.708312  816252 crio.go:462] duration metric: took 1.760844136s to copy over tarball
	I1007 13:59:46.708432  816252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:59:49.244622  816252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.536154647s)
	I1007 13:59:49.244658  816252 crio.go:469] duration metric: took 2.536303696s to extract the tarball
	I1007 13:59:49.244669  816252 ssh_runner.go:146] rm: /preloaded.tar.lz4
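The preload step above amounts to copying the cached tarball into the guest and unpacking it over /var. A rough manual equivalent (illustrative only; minikube drives this over its own SSH session and writes straight to /preloaded.tar.lz4, the /tmp staging path here is just for the sketch):
    scp /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.72.247:/tmp/preloaded.tar.lz4
    ssh docker@192.168.72.247 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm -f /tmp/preloaded.tar.lz4'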
	I1007 13:59:49.283659  816252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:59:49.331755  816252 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:59:49.331785  816252 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:59:49.331794  816252 kubeadm.go:934] updating node { 192.168.72.247 8443 v1.31.1 crio true true} ...
	I1007 13:59:49.331919  816252 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-221184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:bridge-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1007 13:59:49.331995  816252 ssh_runner.go:195] Run: crio config
	I1007 13:59:49.385343  816252 cni.go:84] Creating CNI manager for "bridge"
	I1007 13:59:49.385372  816252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:59:49.385394  816252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.247 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-221184 NodeName:bridge-221184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:59:49.385571  816252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-221184"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:59:49.385657  816252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:59:49.397064  816252 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:59:49.397146  816252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:59:49.408180  816252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 13:59:49.433242  816252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:59:49.452794  816252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1007 13:59:49.471688  816252 ssh_runner.go:195] Run: grep 192.168.72.247	control-plane.minikube.internal$ /etc/hosts
	I1007 13:59:49.476104  816252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:59:49.490057  816252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:59:49.628990  816252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:59:49.647713  816252 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184 for IP: 192.168.72.247
	I1007 13:59:49.647737  816252 certs.go:194] generating shared ca certs ...
	I1007 13:59:49.647753  816252 certs.go:226] acquiring lock for ca certs: {Name:mk6ca7c28f38fbb86128c70ce573f05386aa0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:49.647920  816252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key
	I1007 13:59:49.647968  816252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key
	I1007 13:59:49.647978  816252 certs.go:256] generating profile certs ...
	I1007 13:59:49.648041  816252 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.key
	I1007 13:59:49.648055  816252 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.crt with IP's: []
	I1007 13:59:49.719749  816252 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.crt ...
	I1007 13:59:49.719799  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.crt: {Name:mk6e1a20e57ba0bb75a1b8e5da53e551284f7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:49.720147  816252 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.key ...
	I1007 13:59:49.720196  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/client.key: {Name:mk8b972397aecba5ec13e29add0ad1d243d66d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:49.720378  816252 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key.6fa4a90d
	I1007 13:59:49.720413  816252 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt.6fa4a90d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.247]
	I1007 13:59:50.028539  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:52.529387  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:50.117070  816252 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt.6fa4a90d ...
	I1007 13:59:50.117124  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt.6fa4a90d: {Name:mke3ca2daef4a3a869681f28a1d47582b4f6d4ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:50.145665  816252 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key.6fa4a90d ...
	I1007 13:59:50.145717  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key.6fa4a90d: {Name:mk4e31b561fefd861d204c4cbaec5a8eef24b10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:50.145858  816252 certs.go:381] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt.6fa4a90d -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt
	I1007 13:59:50.145997  816252 certs.go:385] copying /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key.6fa4a90d -> /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key
	I1007 13:59:50.146114  816252 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.key
	I1007 13:59:50.146138  816252 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.crt with IP's: []
	I1007 13:59:50.333884  816252 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.crt ...
	I1007 13:59:50.333923  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.crt: {Name:mk8b127e7bdc32fc72f687b6904395ec662eacad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:50.334165  816252 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.key ...
	I1007 13:59:50.334183  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.key: {Name:mk5b589d390ee34a26d0f7e4ada80293e82a921b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:59:50.334407  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem (1338 bytes)
	W1007 13:59:50.334459  816252 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324_empty.pem, impossibly tiny 0 bytes
	I1007 13:59:50.334470  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:59:50.334493  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:59:50.334517  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:59:50.334539  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/certs/key.pem (1675 bytes)
	I1007 13:59:50.334576  816252 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem (1708 bytes)
	I1007 13:59:50.335239  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:59:50.371768  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:59:50.406222  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:59:50.438011  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:59:50.468447  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:59:50.499481  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:59:50.527836  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:59:50.556886  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/bridge-221184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:59:50.584934  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/ssl/certs/7543242.pem --> /usr/share/ca-certificates/7543242.pem (1708 bytes)
	I1007 13:59:50.612257  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:59:50.639932  816252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-747025/.minikube/certs/754324.pem --> /usr/share/ca-certificates/754324.pem (1338 bytes)
	I1007 13:59:50.669208  816252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:59:50.688870  816252 ssh_runner.go:195] Run: openssl version
	I1007 13:59:50.696494  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:59:50.709213  816252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:50.714489  816252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:08 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:50.714603  816252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:59:50.721666  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:59:50.733909  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/754324.pem && ln -fs /usr/share/ca-certificates/754324.pem /etc/ssl/certs/754324.pem"
	I1007 13:59:50.748074  816252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/754324.pem
	I1007 13:59:50.753949  816252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:27 /usr/share/ca-certificates/754324.pem
	I1007 13:59:50.754017  816252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/754324.pem
	I1007 13:59:50.760809  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/754324.pem /etc/ssl/certs/51391683.0"
	I1007 13:59:50.773419  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7543242.pem && ln -fs /usr/share/ca-certificates/7543242.pem /etc/ssl/certs/7543242.pem"
	I1007 13:59:50.786281  816252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7543242.pem
	I1007 13:59:50.792183  816252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:27 /usr/share/ca-certificates/7543242.pem
	I1007 13:59:50.792262  816252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7543242.pem
	I1007 13:59:50.798827  816252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7543242.pem /etc/ssl/certs/3ec20f2e.0"
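The ln -fs targets above follow OpenSSL's subject-hash naming convention: the link name is the hash printed by openssl x509 -hash plus a ".0" suffix, which is where b5213941, 51391683 and 3ec20f2e in the commands above come from. For example:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0   # symlink back to minikubeCA.pem, so OpenSSL-based clients trust the cluster CA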
	I1007 13:59:50.811090  816252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:59:50.817087  816252 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:59:50.817165  816252 kubeadm.go:392] StartCluster: {Name:bridge-221184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:bridge-221184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:59:50.817276  816252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:59:50.817411  816252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:59:50.857033  816252 cri.go:89] found id: ""
	I1007 13:59:50.857127  816252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:59:50.868832  816252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:59:50.879672  816252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:59:50.890580  816252 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:59:50.890604  816252 kubeadm.go:157] found existing configuration files:
	
	I1007 13:59:50.890658  816252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:59:50.900713  816252 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:59:50.900782  816252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:59:50.911778  816252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:59:50.921951  816252 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:59:50.922051  816252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:59:50.933499  816252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:59:50.945529  816252 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:59:50.945593  816252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:59:50.955876  816252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:59:50.965963  816252 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:59:50.966047  816252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:59:50.975939  816252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:59:51.027798  816252 kubeadm.go:310] W1007 13:59:50.979204     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:59:51.028307  816252 kubeadm.go:310] W1007 13:59:50.980091     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:59:51.180313  816252 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
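The two deprecation warnings are harmless here, but the generated config could be brought up to the current kubeadm API exactly as the message suggests; a sketch, with /var/tmp/minikube/kubeadm-migrated.yaml as a hypothetical output path:
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
                             --new-config /var/tmp/minikube/kubeadm-migrated.yaml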
	I1007 13:59:55.029227  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 13:59:57.527389  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:02.764435  816252 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 14:00:02.764510  816252 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 14:00:02.764587  816252 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 14:00:02.764695  816252 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 14:00:02.764834  816252 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 14:00:02.764906  816252 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 14:00:02.766632  816252 out.go:235]   - Generating certificates and keys ...
	I1007 14:00:02.766759  816252 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 14:00:02.766851  816252 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 14:00:02.766942  816252 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 14:00:02.767024  816252 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 14:00:02.767115  816252 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 14:00:02.767193  816252 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 14:00:02.767264  816252 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 14:00:02.767443  816252 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-221184 localhost] and IPs [192.168.72.247 127.0.0.1 ::1]
	I1007 14:00:02.767509  816252 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 14:00:02.767654  816252 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-221184 localhost] and IPs [192.168.72.247 127.0.0.1 ::1]
	I1007 14:00:02.767743  816252 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 14:00:02.767836  816252 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 14:00:02.767900  816252 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 14:00:02.767972  816252 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 14:00:02.768053  816252 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 14:00:02.768141  816252 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 14:00:02.768218  816252 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 14:00:02.768282  816252 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 14:00:02.768337  816252 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 14:00:02.768431  816252 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 14:00:02.768493  816252 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 14:00:02.771314  816252 out.go:235]   - Booting up control plane ...
	I1007 14:00:02.771459  816252 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 14:00:02.771566  816252 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 14:00:02.771656  816252 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 14:00:02.771817  816252 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 14:00:02.771933  816252 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 14:00:02.772001  816252 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 14:00:02.772147  816252 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 14:00:02.772299  816252 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 14:00:02.772356  816252 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.118286ms
	I1007 14:00:02.772438  816252 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 14:00:02.772505  816252 kubeadm.go:310] [api-check] The API server is healthy after 6.001733009s
	I1007 14:00:02.772634  816252 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 14:00:02.772809  816252 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 14:00:02.772868  816252 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 14:00:02.773060  816252 kubeadm.go:310] [mark-control-plane] Marking the node bridge-221184 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 14:00:02.773151  816252 kubeadm.go:310] [bootstrap-token] Using token: 9dyaqf.d7vc4esoawdgoom4
	I1007 14:00:02.774749  816252 out.go:235]   - Configuring RBAC rules ...
	I1007 14:00:02.774868  816252 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 14:00:02.774984  816252 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 14:00:02.775126  816252 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 14:00:02.775301  816252 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 14:00:02.775464  816252 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 14:00:02.775588  816252 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 14:00:02.775725  816252 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 14:00:02.775775  816252 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 14:00:02.775816  816252 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 14:00:02.775822  816252 kubeadm.go:310] 
	I1007 14:00:02.775868  816252 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 14:00:02.775873  816252 kubeadm.go:310] 
	I1007 14:00:02.775965  816252 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 14:00:02.775981  816252 kubeadm.go:310] 
	I1007 14:00:02.776003  816252 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 14:00:02.776056  816252 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 14:00:02.776099  816252 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 14:00:02.776107  816252 kubeadm.go:310] 
	I1007 14:00:02.776156  816252 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 14:00:02.776166  816252 kubeadm.go:310] 
	I1007 14:00:02.776224  816252 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 14:00:02.776233  816252 kubeadm.go:310] 
	I1007 14:00:02.776279  816252 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 14:00:02.776339  816252 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 14:00:02.776392  816252 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 14:00:02.776398  816252 kubeadm.go:310] 
	I1007 14:00:02.776503  816252 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 14:00:02.776595  816252 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 14:00:02.776603  816252 kubeadm.go:310] 
	I1007 14:00:02.776689  816252 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9dyaqf.d7vc4esoawdgoom4 \
	I1007 14:00:02.776804  816252 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa \
	I1007 14:00:02.776843  816252 kubeadm.go:310] 	--control-plane 
	I1007 14:00:02.776852  816252 kubeadm.go:310] 
	I1007 14:00:02.776978  816252 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 14:00:02.776999  816252 kubeadm.go:310] 
	I1007 14:00:02.777071  816252 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9dyaqf.d7vc4esoawdgoom4 \
	I1007 14:00:02.777210  816252 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa 
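The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA minikube installed under /var/lib/minikube/certs using the standard kubeadm recipe:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # c52291efb406fa07dfe86177776b56c468c80ee2c6548c2708d46535dd7bf6aa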
	I1007 14:00:02.777232  816252 cni.go:84] Creating CNI manager for "bridge"
	I1007 14:00:02.778939  816252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 14:00:00.027479  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:02.027641  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:04.028643  814767 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:02.780326  816252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 14:00:02.797664  816252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
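The 496-byte file written above is minikube's bridge CNI configuration. As an illustration only (the verbatim conflist minikube generates may differ), a bridge + portmap chain for the 10.244.0.0/16 pod CIDR looks roughly like this:
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # Roughly (illustrative, not the exact file):
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
    #       "ipMasq": true, "hairpinMode": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }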
	I1007 14:00:02.825360  816252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 14:00:02.825523  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:02.825630  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-221184 minikube.k8s.io/updated_at=2024_10_07T14_00_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=bridge-221184 minikube.k8s.io/primary=true
	I1007 14:00:02.861409  816252 ops.go:34] apiserver oom_adj: -16
	I1007 14:00:03.009226  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:03.509955  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:04.009828  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:04.510185  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:04.528559  814767 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.528588  814767 pod_ready.go:82] duration metric: took 16.508632867s for pod "coredns-7c65d6cfc9-2z226" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.528599  814767 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.535019  814767 pod_ready.go:93] pod "etcd-flannel-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.535048  814767 pod_ready.go:82] duration metric: took 6.441815ms for pod "etcd-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.535066  814767 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.544971  814767 pod_ready.go:93] pod "kube-apiserver-flannel-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.545004  814767 pod_ready.go:82] duration metric: took 9.930741ms for pod "kube-apiserver-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.545017  814767 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.551636  814767 pod_ready.go:93] pod "kube-controller-manager-flannel-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.551666  814767 pod_ready.go:82] duration metric: took 6.641098ms for pod "kube-controller-manager-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.551678  814767 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-2xw8p" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.561428  814767 pod_ready.go:93] pod "kube-proxy-2xw8p" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.561456  814767 pod_ready.go:82] duration metric: took 9.769839ms for pod "kube-proxy-2xw8p" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.561467  814767 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.929901  814767 pod_ready.go:93] pod "kube-scheduler-flannel-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:04.929935  814767 pod_ready.go:82] duration metric: took 368.459855ms for pod "kube-scheduler-flannel-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:04.929951  814767 pod_ready.go:39] duration metric: took 16.918206854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 14:00:04.929975  814767 api_server.go:52] waiting for apiserver process to appear ...
	I1007 14:00:04.930075  814767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 14:00:04.951013  814767 api_server.go:72] duration metric: took 24.055911908s to wait for apiserver process to appear ...
	I1007 14:00:04.951052  814767 api_server.go:88] waiting for apiserver healthz status ...
	I1007 14:00:04.951079  814767 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I1007 14:00:04.958353  814767 api_server.go:279] https://192.168.39.119:8443/healthz returned 200:
	ok
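The same health probe can be reproduced by hand against the endpoint used above:
    curl -k https://192.168.39.119:8443/healthz   # -k because the cert is signed by minikube's own CA; prints "ok" when healthy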
	I1007 14:00:04.959369  814767 api_server.go:141] control plane version: v1.31.1
	I1007 14:00:04.959396  814767 api_server.go:131] duration metric: took 8.335393ms to wait for apiserver health ...
	I1007 14:00:04.959405  814767 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 14:00:05.128675  814767 system_pods.go:59] 7 kube-system pods found
	I1007 14:00:05.128721  814767 system_pods.go:61] "coredns-7c65d6cfc9-2z226" [c57a6f2e-136e-471f-acfd-a7d83241677a] Running
	I1007 14:00:05.128729  814767 system_pods.go:61] "etcd-flannel-221184" [0ae25b79-c7f5-4d60-9f60-9ccd3c8adf65] Running
	I1007 14:00:05.128735  814767 system_pods.go:61] "kube-apiserver-flannel-221184" [9f20ca26-71cd-49a2-89bf-345f682bbec2] Running
	I1007 14:00:05.128741  814767 system_pods.go:61] "kube-controller-manager-flannel-221184" [2cc61ce2-77e7-4bde-a341-b4ddf7467434] Running
	I1007 14:00:05.128748  814767 system_pods.go:61] "kube-proxy-2xw8p" [c4390e04-3e19-419f-9603-3d991e3af1d2] Running
	I1007 14:00:05.128754  814767 system_pods.go:61] "kube-scheduler-flannel-221184" [ea1c7828-aa95-4f88-be27-689335b68ce5] Running
	I1007 14:00:05.128759  814767 system_pods.go:61] "storage-provisioner" [f3a5eb12-a2f8-450d-a5e7-f09ff1a7bd10] Running
	I1007 14:00:05.128770  814767 system_pods.go:74] duration metric: took 169.355627ms to wait for pod list to return data ...
	I1007 14:00:05.128781  814767 default_sa.go:34] waiting for default service account to be created ...
	I1007 14:00:05.325449  814767 default_sa.go:45] found service account: "default"
	I1007 14:00:05.325480  814767 default_sa.go:55] duration metric: took 196.692377ms for default service account to be created ...
	I1007 14:00:05.325491  814767 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 14:00:05.526606  814767 system_pods.go:86] 7 kube-system pods found
	I1007 14:00:05.526641  814767 system_pods.go:89] "coredns-7c65d6cfc9-2z226" [c57a6f2e-136e-471f-acfd-a7d83241677a] Running
	I1007 14:00:05.526649  814767 system_pods.go:89] "etcd-flannel-221184" [0ae25b79-c7f5-4d60-9f60-9ccd3c8adf65] Running
	I1007 14:00:05.526655  814767 system_pods.go:89] "kube-apiserver-flannel-221184" [9f20ca26-71cd-49a2-89bf-345f682bbec2] Running
	I1007 14:00:05.526660  814767 system_pods.go:89] "kube-controller-manager-flannel-221184" [2cc61ce2-77e7-4bde-a341-b4ddf7467434] Running
	I1007 14:00:05.526665  814767 system_pods.go:89] "kube-proxy-2xw8p" [c4390e04-3e19-419f-9603-3d991e3af1d2] Running
	I1007 14:00:05.526672  814767 system_pods.go:89] "kube-scheduler-flannel-221184" [ea1c7828-aa95-4f88-be27-689335b68ce5] Running
	I1007 14:00:05.526677  814767 system_pods.go:89] "storage-provisioner" [f3a5eb12-a2f8-450d-a5e7-f09ff1a7bd10] Running
	I1007 14:00:05.526686  814767 system_pods.go:126] duration metric: took 201.187787ms to wait for k8s-apps to be running ...
	I1007 14:00:05.526697  814767 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 14:00:05.526753  814767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 14:00:05.547082  814767 system_svc.go:56] duration metric: took 20.370349ms WaitForService to wait for kubelet
	I1007 14:00:05.547126  814767 kubeadm.go:582] duration metric: took 24.652028963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 14:00:05.547155  814767 node_conditions.go:102] verifying NodePressure condition ...
	I1007 14:00:05.725539  814767 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 14:00:05.725571  814767 node_conditions.go:123] node cpu capacity is 2
	I1007 14:00:05.725585  814767 node_conditions.go:105] duration metric: took 178.424553ms to run NodePressure ...
	I1007 14:00:05.725599  814767 start.go:241] waiting for startup goroutines ...
	I1007 14:00:05.725609  814767 start.go:246] waiting for cluster config update ...
	I1007 14:00:05.725621  814767 start.go:255] writing updated cluster config ...
	I1007 14:00:05.725938  814767 ssh_runner.go:195] Run: rm -f paused
	I1007 14:00:05.791812  814767 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 14:00:05.794283  814767 out.go:177] * Done! kubectl is now configured to use "flannel-221184" cluster and "default" namespace by default
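At this point the flannel-221184 context referred to above is in the kubeconfig, so a quick sanity check from the host could be:
    kubectl --context flannel-221184 get nodes -o wide   # should list flannel-221184 as Ready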
	I1007 14:00:05.009711  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:05.509378  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:06.010000  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:06.509280  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:07.009401  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:07.509892  816252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 14:00:07.624900  816252 kubeadm.go:1113] duration metric: took 4.799418243s to wait for elevateKubeSystemPrivileges
	I1007 14:00:07.624945  816252 kubeadm.go:394] duration metric: took 16.807785488s to StartCluster
	I1007 14:00:07.624971  816252 settings.go:142] acquiring lock: {Name:mk253b38d4c9251c79faa1a96e2c9e6cb3a54c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 14:00:07.625063  816252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 14:00:07.626905  816252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-747025/kubeconfig: {Name:mk015c56daae0ab65da25a8cde92b4b264178123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 14:00:07.627274  816252 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.247 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 14:00:07.627303  816252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 14:00:07.627522  816252 config.go:182] Loaded profile config "bridge-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 14:00:07.627315  816252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 14:00:07.627607  816252 addons.go:69] Setting storage-provisioner=true in profile "bridge-221184"
	I1007 14:00:07.627679  816252 addons.go:234] Setting addon storage-provisioner=true in "bridge-221184"
	I1007 14:00:07.627745  816252 host.go:66] Checking if "bridge-221184" exists ...
	I1007 14:00:07.627629  816252 addons.go:69] Setting default-storageclass=true in profile "bridge-221184"
	I1007 14:00:07.627867  816252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-221184"
	I1007 14:00:07.628494  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 14:00:07.628534  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 14:00:07.628568  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 14:00:07.628626  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 14:00:07.629516  816252 out.go:177] * Verifying Kubernetes components...
	I1007 14:00:07.631850  816252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 14:00:07.648112  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1007 14:00:07.648780  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 14:00:07.648891  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I1007 14:00:07.649487  816252 main.go:141] libmachine: Using API Version  1
	I1007 14:00:07.649517  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 14:00:07.649599  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 14:00:07.649913  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 14:00:07.650091  816252 main.go:141] libmachine: Using API Version  1
	I1007 14:00:07.650104  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 14:00:07.650915  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 14:00:07.650951  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 14:00:07.651601  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 14:00:07.651909  816252 main.go:141] libmachine: (bridge-221184) Calling .GetState
	I1007 14:00:07.672379  816252 addons.go:234] Setting addon default-storageclass=true in "bridge-221184"
	I1007 14:00:07.672599  816252 host.go:66] Checking if "bridge-221184" exists ...
	I1007 14:00:07.673035  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 14:00:07.673119  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 14:00:07.673813  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36823
	I1007 14:00:07.674476  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 14:00:07.675093  816252 main.go:141] libmachine: Using API Version  1
	I1007 14:00:07.675113  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 14:00:07.675515  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 14:00:07.675705  816252 main.go:141] libmachine: (bridge-221184) Calling .GetState
	I1007 14:00:07.677713  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 14:00:07.680247  816252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 14:00:07.682568  816252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 14:00:07.682596  816252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 14:00:07.682625  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 14:00:07.687991  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 14:00:07.688025  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 14:00:07.688043  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 14:00:07.688094  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 14:00:07.690190  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 14:00:07.690497  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 14:00:07.690696  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 14:00:07.694637  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I1007 14:00:07.695130  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 14:00:07.695636  816252 main.go:141] libmachine: Using API Version  1
	I1007 14:00:07.695659  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 14:00:07.696270  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 14:00:07.696826  816252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 14:00:07.696867  816252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 14:00:07.719243  816252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34281
	I1007 14:00:07.719795  816252 main.go:141] libmachine: () Calling .GetVersion
	I1007 14:00:07.720484  816252 main.go:141] libmachine: Using API Version  1
	I1007 14:00:07.720504  816252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 14:00:07.720923  816252 main.go:141] libmachine: () Calling .GetMachineName
	I1007 14:00:07.721239  816252 main.go:141] libmachine: (bridge-221184) Calling .GetState
	I1007 14:00:07.723502  816252 main.go:141] libmachine: (bridge-221184) Calling .DriverName
	I1007 14:00:07.723779  816252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 14:00:07.723802  816252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 14:00:07.723823  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHHostname
	I1007 14:00:07.727533  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 14:00:07.727795  816252 main.go:141] libmachine: (bridge-221184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:01:5f", ip: ""} in network mk-bridge-221184: {Iface:virbr4 ExpiryTime:2024-10-07 14:59:34 +0000 UTC Type:0 Mac:52:54:00:a1:01:5f Iaid: IPaddr:192.168.72.247 Prefix:24 Hostname:bridge-221184 Clientid:01:52:54:00:a1:01:5f}
	I1007 14:00:07.727838  816252 main.go:141] libmachine: (bridge-221184) DBG | domain bridge-221184 has defined IP address 192.168.72.247 and MAC address 52:54:00:a1:01:5f in network mk-bridge-221184
	I1007 14:00:07.728011  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHPort
	I1007 14:00:07.728261  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHKeyPath
	I1007 14:00:07.728431  816252 main.go:141] libmachine: (bridge-221184) Calling .GetSSHUsername
	I1007 14:00:07.728598  816252 sshutil.go:53] new ssh client: &{IP:192.168.72.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/bridge-221184/id_rsa Username:docker}
	I1007 14:00:08.088234  816252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 14:00:08.088293  816252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 14:00:08.124450  816252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 14:00:08.129226  816252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 14:00:08.152587  816252 node_ready.go:35] waiting up to 15m0s for node "bridge-221184" to be "Ready" ...
	I1007 14:00:08.165043  816252 node_ready.go:49] node "bridge-221184" has status "Ready":"True"
	I1007 14:00:08.165089  816252 node_ready.go:38] duration metric: took 12.46155ms for node "bridge-221184" to be "Ready" ...
	I1007 14:00:08.165107  816252 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 14:00:08.179014  816252 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-7qt8t" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:08.765048  816252 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1007 14:00:08.765201  816252 main.go:141] libmachine: Making call to close driver server
	I1007 14:00:08.765232  816252 main.go:141] libmachine: (bridge-221184) Calling .Close
	I1007 14:00:08.765692  816252 main.go:141] libmachine: (bridge-221184) DBG | Closing plugin on server side
	I1007 14:00:08.765719  816252 main.go:141] libmachine: Successfully made call to close driver server
	I1007 14:00:08.765731  816252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 14:00:08.765740  816252 main.go:141] libmachine: Making call to close driver server
	I1007 14:00:08.765748  816252 main.go:141] libmachine: (bridge-221184) Calling .Close
	I1007 14:00:08.766092  816252 main.go:141] libmachine: Successfully made call to close driver server
	I1007 14:00:08.766109  816252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 14:00:08.788628  816252 main.go:141] libmachine: Making call to close driver server
	I1007 14:00:08.788659  816252 main.go:141] libmachine: (bridge-221184) Calling .Close
	I1007 14:00:08.789011  816252 main.go:141] libmachine: Successfully made call to close driver server
	I1007 14:00:08.789031  816252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 14:00:09.048537  816252 main.go:141] libmachine: Making call to close driver server
	I1007 14:00:09.048562  816252 main.go:141] libmachine: (bridge-221184) Calling .Close
	I1007 14:00:09.048935  816252 main.go:141] libmachine: (bridge-221184) DBG | Closing plugin on server side
	I1007 14:00:09.048988  816252 main.go:141] libmachine: Successfully made call to close driver server
	I1007 14:00:09.048995  816252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 14:00:09.049004  816252 main.go:141] libmachine: Making call to close driver server
	I1007 14:00:09.049011  816252 main.go:141] libmachine: (bridge-221184) Calling .Close
	I1007 14:00:09.049325  816252 main.go:141] libmachine: (bridge-221184) DBG | Closing plugin on server side
	I1007 14:00:09.049347  816252 main.go:141] libmachine: Successfully made call to close driver server
	I1007 14:00:09.049369  816252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 14:00:09.052486  816252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1007 14:00:09.054316  816252 addons.go:510] duration metric: took 1.426987094s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1007 14:00:09.270477  816252 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-221184" context rescaled to 1 replicas
	I1007 14:00:10.187157  816252 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qt8t" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:12.188278  816252 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qt8t" in "kube-system" namespace has status "Ready":"False"
	I1007 14:00:14.687488  816252 pod_ready.go:93] pod "coredns-7c65d6cfc9-7qt8t" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:14.687515  816252 pod_ready.go:82] duration metric: took 6.508467032s for pod "coredns-7c65d6cfc9-7qt8t" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:14.687527  816252 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-vzvnk" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:14.690328  816252 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-vzvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vzvnk" not found
	I1007 14:00:14.690369  816252 pod_ready.go:82] duration metric: took 2.826144ms for pod "coredns-7c65d6cfc9-vzvnk" in "kube-system" namespace to be "Ready" ...
	E1007 14:00:14.690384  816252 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-vzvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vzvnk" not found
	I1007 14:00:14.690395  816252 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.698134  816252 pod_ready.go:93] pod "etcd-bridge-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:15.698160  816252 pod_ready.go:82] duration metric: took 1.007758096s for pod "etcd-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.698172  816252 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.703511  816252 pod_ready.go:93] pod "kube-apiserver-bridge-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:15.703540  816252 pod_ready.go:82] duration metric: took 5.361188ms for pod "kube-apiserver-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.703551  816252 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.709013  816252 pod_ready.go:93] pod "kube-controller-manager-bridge-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:15.709035  816252 pod_ready.go:82] duration metric: took 5.477595ms for pod "kube-controller-manager-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.709046  816252 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-fr4dj" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.714338  816252 pod_ready.go:93] pod "kube-proxy-fr4dj" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:15.714367  816252 pod_ready.go:82] duration metric: took 5.313111ms for pod "kube-proxy-fr4dj" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:15.714381  816252 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:16.083778  816252 pod_ready.go:93] pod "kube-scheduler-bridge-221184" in "kube-system" namespace has status "Ready":"True"
	I1007 14:00:16.083801  816252 pod_ready.go:82] duration metric: took 369.412068ms for pod "kube-scheduler-bridge-221184" in "kube-system" namespace to be "Ready" ...
	I1007 14:00:16.083812  816252 pod_ready.go:39] duration metric: took 7.918688915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 14:00:16.083837  816252 api_server.go:52] waiting for apiserver process to appear ...
	I1007 14:00:16.083894  816252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 14:00:16.099542  816252 api_server.go:72] duration metric: took 8.47220375s to wait for apiserver process to appear ...
	I1007 14:00:16.099578  816252 api_server.go:88] waiting for apiserver healthz status ...
	I1007 14:00:16.099604  816252 api_server.go:253] Checking apiserver healthz at https://192.168.72.247:8443/healthz ...
	I1007 14:00:16.103956  816252 api_server.go:279] https://192.168.72.247:8443/healthz returned 200:
	ok
	I1007 14:00:16.104871  816252 api_server.go:141] control plane version: v1.31.1
	I1007 14:00:16.104897  816252 api_server.go:131] duration metric: took 5.310074ms to wait for apiserver health ...
	I1007 14:00:16.104908  816252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 14:00:16.286039  816252 system_pods.go:59] 7 kube-system pods found
	I1007 14:00:16.286077  816252 system_pods.go:61] "coredns-7c65d6cfc9-7qt8t" [1f94907a-763c-442e-8f2b-af167ca8bc94] Running
	I1007 14:00:16.286085  816252 system_pods.go:61] "etcd-bridge-221184" [3aa4377f-5cea-4852-9ec1-0e06ddc7234d] Running
	I1007 14:00:16.286090  816252 system_pods.go:61] "kube-apiserver-bridge-221184" [fbf46717-592a-4def-9d9d-567fd31ca401] Running
	I1007 14:00:16.286096  816252 system_pods.go:61] "kube-controller-manager-bridge-221184" [d82f22bf-769c-496c-b4a0-96fdc27ce630] Running
	I1007 14:00:16.286101  816252 system_pods.go:61] "kube-proxy-fr4dj" [533a00c2-4d33-4b65-a04b-59b9a622a6fc] Running
	I1007 14:00:16.286106  816252 system_pods.go:61] "kube-scheduler-bridge-221184" [1d523491-f9fc-4c79-bdf2-cefddd906203] Running
	I1007 14:00:16.286111  816252 system_pods.go:61] "storage-provisioner" [d1269e08-e044-4816-b9a9-c2cdf6e997c4] Running
	I1007 14:00:16.286119  816252 system_pods.go:74] duration metric: took 181.202795ms to wait for pod list to return data ...
	I1007 14:00:16.286129  816252 default_sa.go:34] waiting for default service account to be created ...
	I1007 14:00:16.485264  816252 default_sa.go:45] found service account: "default"
	I1007 14:00:16.485299  816252 default_sa.go:55] duration metric: took 199.162341ms for default service account to be created ...
	I1007 14:00:16.485313  816252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 14:00:16.685256  816252 system_pods.go:86] 7 kube-system pods found
	I1007 14:00:16.685291  816252 system_pods.go:89] "coredns-7c65d6cfc9-7qt8t" [1f94907a-763c-442e-8f2b-af167ca8bc94] Running
	I1007 14:00:16.685299  816252 system_pods.go:89] "etcd-bridge-221184" [3aa4377f-5cea-4852-9ec1-0e06ddc7234d] Running
	I1007 14:00:16.685304  816252 system_pods.go:89] "kube-apiserver-bridge-221184" [fbf46717-592a-4def-9d9d-567fd31ca401] Running
	I1007 14:00:16.685309  816252 system_pods.go:89] "kube-controller-manager-bridge-221184" [d82f22bf-769c-496c-b4a0-96fdc27ce630] Running
	I1007 14:00:16.685314  816252 system_pods.go:89] "kube-proxy-fr4dj" [533a00c2-4d33-4b65-a04b-59b9a622a6fc] Running
	I1007 14:00:16.685319  816252 system_pods.go:89] "kube-scheduler-bridge-221184" [1d523491-f9fc-4c79-bdf2-cefddd906203] Running
	I1007 14:00:16.685323  816252 system_pods.go:89] "storage-provisioner" [d1269e08-e044-4816-b9a9-c2cdf6e997c4] Running
	I1007 14:00:16.685335  816252 system_pods.go:126] duration metric: took 200.012112ms to wait for k8s-apps to be running ...
	I1007 14:00:16.685345  816252 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 14:00:16.685397  816252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 14:00:16.703075  816252 system_svc.go:56] duration metric: took 17.718536ms WaitForService to wait for kubelet
	I1007 14:00:16.703118  816252 kubeadm.go:582] duration metric: took 9.075785073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 14:00:16.703147  816252 node_conditions.go:102] verifying NodePressure condition ...
	I1007 14:00:16.883974  816252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 14:00:16.884007  816252 node_conditions.go:123] node cpu capacity is 2
	I1007 14:00:16.884032  816252 node_conditions.go:105] duration metric: took 180.86905ms to run NodePressure ...
	I1007 14:00:16.884048  816252 start.go:241] waiting for startup goroutines ...
	I1007 14:00:16.884057  816252 start.go:246] waiting for cluster config update ...
	I1007 14:00:16.884071  816252 start.go:255] writing updated cluster config ...
	I1007 14:00:16.884350  816252 ssh_runner.go:195] Run: rm -f paused
	I1007 14:00:16.936308  816252 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 14:00:16.938731  816252 out.go:177] * Done! kubectl is now configured to use "bridge-221184" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.272112476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=379c074b-93b0-40c2-8612-ac718d72f77e name=/runtime.v1.RuntimeService/Version
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.273518420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=226a4610-3fe7-4953-a219-eb4d6fa7f873 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.273928338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309652273901903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=226a4610-3fe7-4953-a219-eb4d6fa7f873 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.274669078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbbda48b-3fdd-4bff-a9f4-28f1384a6f57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.274721711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbbda48b-3fdd-4bff-a9f4-28f1384a6f57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.274913094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbbda48b-3fdd-4bff-a9f4-28f1384a6f57 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.304417457Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b49f6832-5af4-4ed8-8d99-a25cba2a7f26 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.304675295Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d771bdef2ff46c6ddeb4a1a0764ba853f99bbca6763b8c5b2256ecff8258fa85,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-drcg5,Uid:c88368de-954a-484b-8332-a05bfb0b6c9b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308923453636992,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-drcg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88368de-954a-484b-8332-a05bfb0b6c9b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:43.144835206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:23077570-0411-48e4-9f38-2933
e98132b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308923327900519,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T13:48:43.018984265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mrgdp,Uid:a412fc5b-c29a-403d-87c3-2d0d035890fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921510644293,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:41.187306913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-szgtd,Uid:579c2478
-e31e-41a7-b18b-749e86c54764,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921465470416,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:41.154946689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&PodSandboxMetadata{Name:kube-proxy-jpvx5,Uid:df825f23-4b34-44f3-a641-905c8bdc25ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308921285228903,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T13:48:40.969814796Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-489319,Uid:9f08951ea541525829047ffe90f29a47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308910454335599,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.101:2379,kubernetes.io/config.hash: 9f08951ea541525829047ffe90f29a47,kubernetes.io/config.seen: 2024-10-07T13:48:30.005601205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:487d6489d11c81f7366fb8e953ed9f707e9
86af8c3d162cff86930ddddc2a722,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-489319,Uid:62651fa186d270c62f23f7d307fe1a21,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728308910452536465,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.101:8444,kubernetes.io/config.hash: 62651fa186d270c62f23f7d307fe1a21,kubernetes.io/config.seen: 2024-10-07T13:48:30.005602515Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-489319,Uid:1a78d6497a45d13aff1bdc0c052f5f6d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728308910442293869,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a78d6497a45d13aff1bdc0c052f5f6d,kubernetes.io/config.seen: 2024-10-07T13:48:30.005599810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-489319,Uid:899c94957ea4481f28dea1c0c559d6a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728308910439587569,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 899c94957ea4481f28dea1c0c559d6a8,kubernetes.io/config.seen: 2024-10-07T13:48:30.005596206Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-489319,Uid:62651fa186d270c62f23f7d307fe1a21,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728308621912916703,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.101:8444,kubernetes.io/config.hash: 62651fa186d270c62f23f7d307fe1a21,kubernetes.io/config.s
een: 2024-10-07T13:43:41.424456710Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b49f6832-5af4-4ed8-8d99-a25cba2a7f26 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.305282686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a5412d9-03ac-497a-b923-a670badeb6a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.305348695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a5412d9-03ac-497a-b923-a670badeb6a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.305663323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a5412d9-03ac-497a-b923-a670badeb6a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.320085102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d45fc53-8c7d-4021-b381-677168c220b6 name=/runtime.v1.RuntimeService/Version
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.320167809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d45fc53-8c7d-4021-b381-677168c220b6 name=/runtime.v1.RuntimeService/Version
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.323211902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62a3db85-12cb-4395-a9c9-13c2728ce292 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.323601310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309652323574287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62a3db85-12cb-4395-a9c9-13c2728ce292 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.324094772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94471dea-dd17-44fa-be92-3a3e59316195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.324297632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94471dea-dd17-44fa-be92-3a3e59316195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.324514664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94471dea-dd17-44fa-be92-3a3e59316195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.358979425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6fc4567-af7c-453f-bc07-dff6e119dc8d name=/runtime.v1.RuntimeService/Version
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.359119740Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6fc4567-af7c-453f-bc07-dff6e119dc8d name=/runtime.v1.RuntimeService/Version
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.360253987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20e6d1d9-b298-4d22-aff3-7a92ffda5c3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.360650385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309652360630482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20e6d1d9-b298-4d22-aff3-7a92ffda5c3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.361204024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a866b42a-0ce0-4719-966a-1acae110344a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.361268960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a866b42a-0ce0-4719-966a-1acae110344a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 14:00:52 default-k8s-diff-port-489319 crio[711]: time="2024-10-07 14:00:52.361512885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81,PodSandboxId:4a0fb542274afd74b66677be3b949e71b24fb62bc8db09d96439cb5c2768aeec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728308923512400783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23077570-0411-48e4-9f38-2933e98132b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75,PodSandboxId:39d37a287f8102cb95b8691311cd4e85bcc171bbbd1ec3cd83cfdae2fdfad6c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922682929168,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-szgtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 579c2478-e31e-41a7-b18b-749e86c54764,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7,PodSandboxId:94a04d87f5059636a40bb54bf131f12883ffada67070f87a7a4749784532ed44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728308922313519617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mrgdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a412fc5b-c29a-403d-87c3-2d0d035890fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275,PodSandboxId:ab5c5a8580645a6392963fd7936a6b07449332b05d4de4ed442e7cb4257f729c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728308921661729154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jpvx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df825f23-4b34-44f3-a641-905c8bdc25ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac,PodSandboxId:3b72698db0300727e6c05da95a05ac3963c2ca61004772ba9e60f3f4dad7b3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728308910691800478,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08951ea541525829047ffe90f29a47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2,PodSandboxId:b758eee93396795cd46eea1c83eae2195da3a30e4f23ac452b226a9c373595f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728308910657230419,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 899c94957ea4481f28dea1c0c559d6a8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328,PodSandboxId:85bd497799d85f014e2a51d5e8bd0ad5fa73d42f5b13ddca37e88ab2c18147bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728308910689390160,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78d6497a45d13aff1bdc0c052f5f6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588,PodSandboxId:487d6489d11c81f7366fb8e953ed9f707e986af8c3d162cff86930ddddc2a722,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728308910630414793,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b,PodSandboxId:2ca5d4c120c2c0a33547cbc66c317ca7b390a83e27a56f559a116b2ca3e98f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728308622225339675,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-489319,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62651fa186d270c62f23f7d307fe1a21,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a866b42a-0ce0-4719-966a-1acae110344a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	221460feca963       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   4a0fb542274af       storage-provisioner
	08241c405a16f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 minutes ago      Running             coredns                   0                   39d37a287f810       coredns-7c65d6cfc9-szgtd
	2ca3fa3510acc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 minutes ago      Running             coredns                   0                   94a04d87f5059       coredns-7c65d6cfc9-mrgdp
	327a40c7d2ddc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   12 minutes ago      Running             kube-proxy                0                   ab5c5a8580645       kube-proxy-jpvx5
	bc9755b466e84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   12 minutes ago      Running             etcd                      2                   3b72698db0300       etcd-default-k8s-diff-port-489319
	4ebb50a700da6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   12 minutes ago      Running             kube-scheduler            2                   85bd497799d85       kube-scheduler-default-k8s-diff-port-489319
	951c910599f12       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   12 minutes ago      Running             kube-controller-manager   2                   b758eee933967       kube-controller-manager-default-k8s-diff-port-489319
	9b5bced8cf581       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   12 minutes ago      Running             kube-apiserver            2                   487d6489d11c8       kube-apiserver-default-k8s-diff-port-489319
	99e283eccd53f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Exited              kube-apiserver            1                   2ca5d4c120c2c       kube-apiserver-default-k8s-diff-port-489319
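
	The container status table above is the crio-level view from inside the node; the only Exited entry is kube-apiserver attempt 1, everything else (including attempt 2 of the control-plane pods) is Running. A minimal sketch for reproducing the listing on this profile, assuming the standard minikube and crictl tooling:

	    minikube ssh -p default-k8s-diff-port-489319
	    sudo crictl ps -a    # all containers, including exited ones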
	
	
	==> coredns [08241c405a16f1680e6db6ff5e689cb88ab950d47d830993140794bcf6e52b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2ca3fa3510acc735fb8da130b5f76def2ff2cc1323786d41ff4d656b338996a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
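
	Both coredns replicas start cleanly and only print their startup banner (CoreDNS-1.11.3), so DNS is not implicated here. The same logs can be pulled through the API server instead of crio; the pod names are the ones shown in the container status table:

	    kubectl --context default-k8s-diff-port-489319 -n kube-system logs coredns-7c65d6cfc9-szgtd
	    kubectl --context default-k8s-diff-port-489319 -n kube-system logs coredns-7c65d6cfc9-mrgdp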
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-489319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-489319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=default-k8s-diff-port-489319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_48_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-489319
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 14:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:58:58 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:58:58 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:58:58 +0000   Mon, 07 Oct 2024 13:48:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:58:58 +0000   Mon, 07 Oct 2024 13:48:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.101
	  Hostname:    default-k8s-diff-port-489319
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 322d1f8dd6734fdeb4ccbd498b03009c
	  System UUID:                322d1f8d-d673-4fde-b4cc-bd498b03009c
	  Boot ID:                    9a5d800d-8ecc-4df9-933a-cc537b29b76b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-mrgdp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-szgtd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-default-k8s-diff-port-489319                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-489319             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-489319    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jpvx5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-489319             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-6867b74b74-drcg5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node default-k8s-diff-port-489319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node default-k8s-diff-port-489319 event: Registered Node default-k8s-diff-port-489319 in Controller
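
	The node itself looks healthy: Ready since 13:48:33, no taints, no pressure conditions, and all nine kube-system pods are scheduled, including metrics-server-6867b74b74-drcg5. A hedged sketch for regenerating this view and checking the metrics-server pod, which is the component the API-server errors below point at (the k8s-app=metrics-server label selector is an assumption about how the addon is labelled):

	    kubectl --context default-k8s-diff-port-489319 describe node default-k8s-diff-port-489319
	    kubectl --context default-k8s-diff-port-489319 -n kube-system get pods -l k8s-app=metrics-server -o wide   # label selector is assumed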
	
	
	==> dmesg <==
	[  +0.052554] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042221] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944154] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.748913] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628467] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.355286] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.060842] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063310] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.202201] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.130526] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.325083] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.371883] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.072926] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.048086] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +5.643513] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.123872] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 7 13:48] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.146429] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +4.634340] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.927918] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +5.451772] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.080265] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +5.960559] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [bc9755b466e84d4b6d76b6d73172022c1ccd63aeb16b5ca47b6a0cfe55b912ac] <==
	{"level":"warn","ts":"2024-10-07T13:58:44.432153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.829374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:58:44.432237Z","caller":"traceutil/trace.go:171","msg":"trace[1713816922] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:938; }","duration":"109.932737ms","start":"2024-10-07T13:58:44.322292Z","end":"2024-10-07T13:58:44.432225Z","steps":["trace[1713816922] 'agreement among raft nodes before linearized reading'  (duration: 109.648875ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:58:44.432644Z","caller":"traceutil/trace.go:171","msg":"trace[519181748] transaction","detail":"{read_only:false; response_revision:938; number_of_response:1; }","duration":"113.024177ms","start":"2024-10-07T13:58:44.319611Z","end":"2024-10-07T13:58:44.432635Z","steps":["trace[519181748] 'process raft request'  (duration: 105.94779ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:58:47.372145Z","caller":"traceutil/trace.go:171","msg":"trace[1833225897] transaction","detail":"{read_only:false; response_revision:942; number_of_response:1; }","duration":"127.292096ms","start":"2024-10-07T13:58:47.244839Z","end":"2024-10-07T13:58:47.372131Z","steps":["trace[1833225897] 'process raft request'  (duration: 126.81004ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:59:29.744290Z","caller":"traceutil/trace.go:171","msg":"trace[2052425652] transaction","detail":"{read_only:false; response_revision:976; number_of_response:1; }","duration":"103.498944ms","start":"2024-10-07T13:59:29.640746Z","end":"2024-10-07T13:59:29.744245Z","steps":["trace[2052425652] 'process raft request'  (duration: 103.181662ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:59:50.244482Z","caller":"traceutil/trace.go:171","msg":"trace[2074904788] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"373.934539ms","start":"2024-10-07T13:59:49.870495Z","end":"2024-10-07T13:59:50.244429Z","steps":["trace[2074904788] 'process raft request'  (duration: 373.790472ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:50.244885Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:59:49.870476Z","time spent":"374.234712ms","remote":"127.0.0.1:53580","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:991 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-07T13:59:50.511974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.388487ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:59:50.512624Z","caller":"traceutil/trace.go:171","msg":"trace[861093487] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:992; }","duration":"192.069569ms","start":"2024-10-07T13:59:50.320539Z","end":"2024-10-07T13:59:50.512609Z","steps":["trace[861093487] 'range keys from in-memory index tree'  (duration: 191.279014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:50.882874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.846607ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14490492773169101984 > lease_revoke:<id:491892673d1bfc45>","response":"size:27"}
	{"level":"info","ts":"2024-10-07T13:59:50.883142Z","caller":"traceutil/trace.go:171","msg":"trace[356704578] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1143; }","duration":"380.283796ms","start":"2024-10-07T13:59:50.502838Z","end":"2024-10-07T13:59:50.883122Z","steps":["trace[356704578] 'read index received'  (duration: 130.918319ms)","trace[356704578] 'applied index is now lower than readState.Index'  (duration: 249.364102ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:59:50.883276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"380.422495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:59:50.883358Z","caller":"traceutil/trace.go:171","msg":"trace[1549206266] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:992; }","duration":"380.514109ms","start":"2024-10-07T13:59:50.502832Z","end":"2024-10-07T13:59:50.883346Z","steps":["trace[1549206266] 'agreement among raft nodes before linearized reading'  (duration: 380.377129ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:50.883405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:59:50.502782Z","time spent":"380.61353ms","remote":"127.0.0.1:53434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-07T13:59:50.887523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.002458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:59:50.888074Z","caller":"traceutil/trace.go:171","msg":"trace[1717649850] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:993; }","duration":"359.489418ms","start":"2024-10-07T13:59:50.528504Z","end":"2024-10-07T13:59:50.887993Z","steps":["trace[1717649850] 'agreement among raft nodes before linearized reading'  (duration: 358.982943ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:50.888212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:59:50.528461Z","time spent":"359.73393ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-07T13:59:52.471562Z","caller":"traceutil/trace.go:171","msg":"trace[2096198362] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"212.062021ms","start":"2024-10-07T13:59:52.259480Z","end":"2024-10-07T13:59:52.471542Z","steps":["trace[2096198362] 'process raft request'  (duration: 211.794711ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:59:52.490794Z","caller":"traceutil/trace.go:171","msg":"trace[1248875323] linearizableReadLoop","detail":"{readStateIndex:1147; appliedIndex:1146; }","duration":"168.786371ms","start":"2024-10-07T13:59:52.321990Z","end":"2024-10-07T13:59:52.490777Z","steps":["trace[1248875323] 'read index received'  (duration: 150.390787ms)","trace[1248875323] 'applied index is now lower than readState.Index'  (duration: 18.394981ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:59:52.490906Z","caller":"traceutil/trace.go:171","msg":"trace[685393753] transaction","detail":"{read_only:false; response_revision:995; number_of_response:1; }","duration":"228.997953ms","start":"2024-10-07T13:59:52.261899Z","end":"2024-10-07T13:59:52.490897Z","steps":["trace[685393753] 'process raft request'  (duration: 228.742255ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:52.491148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.576296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-07T13:59:52.491247Z","caller":"traceutil/trace.go:171","msg":"trace[393372830] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:995; }","duration":"103.691993ms","start":"2024-10-07T13:59:52.387546Z","end":"2024-10-07T13:59:52.491238Z","steps":["trace[393372830] 'agreement among raft nodes before linearized reading'  (duration: 103.546144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:59:52.491263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.267625ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:59:52.491329Z","caller":"traceutil/trace.go:171","msg":"trace[1736107962] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:995; }","duration":"169.333785ms","start":"2024-10-07T13:59:52.321982Z","end":"2024-10-07T13:59:52.491315Z","steps":["trace[1736107962] 'agreement among raft nodes before linearized reading'  (duration: 169.256707ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T14:00:20.913575Z","caller":"traceutil/trace.go:171","msg":"trace[73747184] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"240.999523ms","start":"2024-10-07T14:00:20.672540Z","end":"2024-10-07T14:00:20.913539Z","steps":["trace[73747184] 'process raft request'  (duration: 240.731404ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:00:52 up 17 min,  0 users,  load average: 0.30, 0.28, 0.25
	Linux default-k8s-diff-port-489319 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [99e283eccd53f0778915e2ac82922fa35573c4e71b8d6bd1e7d66fb45114203b] <==
	W1007 13:48:22.875380       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.890184       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.892854       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.898292       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:22.959233       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:23.310768       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:23.313351       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:26.746608       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:26.942815       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.096161       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.101199       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.259618       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.348684       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.392327       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.456343       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.504562       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.603419       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.764338       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.797341       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.819723       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.842503       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.851301       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:27.887234       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:28.022747       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 13:48:28.057190       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
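
	This is the apiserver that shows up as Exited (attempt 1) in the container status table: its log is only the "connection refused" spam against etcd on 127.0.0.1:2379 between 13:48:22 and 13:48:28, i.e. the old instance losing etcd while the control plane was being restarted, so it does not explain the test failure. Its full log can still be read back from crio if needed:

	    minikube ssh -p default-k8s-diff-port-489319 -- sudo crictl logs 99e283eccd53f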
	
	
	==> kube-apiserver [9b5bced8cf58178214c793ce1d3e5fc4083b200b4662d96d92ff0315d0501588] <==
	I1007 13:56:34.316842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:56:34.316910       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:58:33.316224       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:58:33.316350       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1007 13:58:34.318568       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:58:34.318648       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:58:34.318769       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:58:34.318826       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:58:34.319806       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:58:34.319903       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 13:59:34.321360       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:59:34.321459       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1007 13:59:34.322203       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 13:59:34.322490       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 13:59:34.323123       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:59:34.324466       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
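
	The running apiserver (attempt 2) repeats a single problem roughly once a minute: aggregating the v1beta1.metrics.k8s.io APIService fails with a 503, meaning the metrics-server Service behind it is not answering. That is consistent with the metrics-server-6867b74b74 pod on this node not serving. Two hedged checks (the APIService name comes from the log; the deployment name metrics-server is an assumption):

	    kubectl --context default-k8s-diff-port-489319 get apiservice v1beta1.metrics.k8s.io -o wide
	    kubectl --context default-k8s-diff-port-489319 -n kube-system describe deployment metrics-server   # deployment name is assumed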
	
	
	==> kube-controller-manager [951c910599f120d43c3edd017634438dfb830ef0ac7aa4b69aa67a41075425e2] <==
	E1007 13:55:40.349499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:55:40.843390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:56:10.357761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:10.853138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:56:40.366305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:56:40.862709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:57:10.374449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:57:10.869984       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:57:40.383849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:57:40.883800       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:58:10.390825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:58:10.894714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:58:40.396634       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:58:40.905438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:58:58.601338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-489319"
	E1007 13:59:10.402671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:59:10.913251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 13:59:40.409974       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 13:59:40.924284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1007 13:59:52.481059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.101516ms"
	I1007 14:00:03.264217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="705.105µs"
	E1007 14:00:10.417839       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 14:00:10.932299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1007 14:00:40.425474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1007 14:00:40.942087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [327a40c7d2ddc4026d4820d7fb98d6e5ec32e08b607441d10a5e425ce65f8275] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 13:48:42.383238       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 13:48:42.493270       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.101"]
	E1007 13:48:42.493386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:48:42.736147       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 13:48:42.736194       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 13:48:42.736222       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:48:42.794249       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:48:42.794569       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:48:42.794601       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:48:42.800907       1 config.go:199] "Starting service config controller"
	I1007 13:48:42.800973       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:48:42.801056       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:48:42.801061       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:48:42.808188       1 config.go:328] "Starting node config controller"
	I1007 13:48:42.808222       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:48:42.901242       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:48:42.901308       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:48:42.910386       1 shared_informer.go:320] Caches are synced for node config
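
	kube-proxy's only errors are the two "could not run nftables command ... Operation not supported" cleanup attempts, which is the usual situation on this Buildroot kernel; it then falls back to the iptables proxier and all three caches sync, so the node's data path is working. The effective proxier configuration can be read back from the kubeadm-managed ConfigMap (the ConfigMap name kube-proxy is an assumption about the kubeadm layout):

	    kubectl --context default-k8s-diff-port-489319 -n kube-system get configmap kube-proxy -o yaml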
	
	
	==> kube-scheduler [4ebb50a700da676c2d085d6869dc9103bc9139596bf8788d327ca51890ae7328] <==
	W1007 13:48:34.472815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:48:34.472893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.481716       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:48:34.481777       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 13:48:34.494325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.494566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.551282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:48:34.551344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.551406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.551445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.563652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:48:34.563740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.619301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 13:48:34.619414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.661492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:48:34.661549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.738473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:48:34.738530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.775243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:48:34.775346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.788281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:48:34.788338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:48:34.850213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:48:34.850266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1007 13:48:36.747924       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:59:37 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:37.272874    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:59:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:46.531489    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309586530736878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:59:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:46.531958    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309586530736878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:59:52 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:52.247681    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 13:59:56 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:56.534084    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309596533396719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:59:56 default-k8s-diff-port-489319 kubelet[2908]: E1007 13:59:56.534152    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309596533396719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:03 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:03.244857    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 14:00:06 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:06.536622    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309606536252210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:06 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:06.536651    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309606536252210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:16 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:16.538577    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309616537994691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:16 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:16.538622    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309616537994691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:17 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:17.244477    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 14:00:26 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:26.540119    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309626539711961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:26 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:26.540626    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309626539711961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:29 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:29.244196    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:36.276831    2908 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:36.544853    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309636543956152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:36 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:36.544915    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309636543956152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:40 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:40.244505    2908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-drcg5" podUID="c88368de-954a-484b-8332-a05bfb0b6c9b"
	Oct 07 14:00:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:46.546466    2908 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309646545995252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 14:00:46 default-k8s-diff-port-489319 kubelet[2908]: E1007 14:00:46.546558    2908 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728309646545995252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [221460feca963f9611d5ae1bea6bc793c92343d3a9f2601ece9d99bfd6a7ec81] <==
	I1007 13:48:43.615433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:48:43.630738       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:48:43.631434       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:48:43.667112       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:48:43.667461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a!
	I1007 13:48:43.668680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e82ad34b-00ed-407b-b175-8d583bc7e6c6", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a became leader
	I1007 13:48:43.767621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-489319_4674e1c8-6ac0-4df1-b56e-61cba430c30a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-drcg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5: exit status 1 (67.008934ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-drcg5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-489319 describe pod metrics-server-6867b74b74-drcg5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (179.10s)

                                                
                                    

Test pass (249/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 3.8
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.15
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 114.07
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 133.48
31 TestAddons/serial/GCPAuth/Namespaces 0.15
34 TestAddons/parallel/Registry 17.13
36 TestAddons/parallel/InspektorGadget 11.3
39 TestAddons/parallel/CSI 37.86
40 TestAddons/parallel/Headlamp 17.8
41 TestAddons/parallel/CloudSpanner 6.62
42 TestAddons/parallel/LocalPath 52.56
43 TestAddons/parallel/NvidiaDevicePlugin 5.53
44 TestAddons/parallel/Yakd 12.04
46 TestCertOptions 44.06
47 TestCertExpiration 472.74
49 TestForceSystemdFlag 60.38
50 TestForceSystemdEnv 43.49
52 TestKVMDriverInstallOrUpdate 1.35
56 TestErrorSpam/setup 43.85
57 TestErrorSpam/start 0.4
58 TestErrorSpam/status 0.79
59 TestErrorSpam/pause 1.72
60 TestErrorSpam/unpause 1.83
61 TestErrorSpam/stop 5.32
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.21
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 45.13
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
73 TestFunctional/serial/CacheCmd/cache/add_local 1.14
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.12
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
81 TestFunctional/serial/ExtraConfig 44.52
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.59
84 TestFunctional/serial/LogsFileCmd 1.6
85 TestFunctional/serial/InvalidService 3.59
87 TestFunctional/parallel/ConfigCmd 0.39
88 TestFunctional/parallel/DashboardCmd 16.22
89 TestFunctional/parallel/DryRun 0.33
90 TestFunctional/parallel/InternationalLanguage 0.17
91 TestFunctional/parallel/StatusCmd 0.79
95 TestFunctional/parallel/ServiceCmdConnect 56.48
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 79.24
99 TestFunctional/parallel/SSHCmd 0.45
100 TestFunctional/parallel/CpCmd 1.48
101 TestFunctional/parallel/MySQL 24.69
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.44
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.18
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.86
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.45
118 TestFunctional/parallel/ImageCommands/ImageBuild 6.25
119 TestFunctional/parallel/ImageCommands/Setup 0.46
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 65.23
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
141 TestFunctional/parallel/ProfileCmd/profile_list 0.35
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
143 TestFunctional/parallel/MountCmd/any-port 54.86
144 TestFunctional/parallel/MountCmd/specific-port 1.97
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
146 TestFunctional/parallel/ServiceCmd/List 0.62
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.71
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
149 TestFunctional/parallel/ServiceCmd/Format 0.39
150 TestFunctional/parallel/ServiceCmd/URL 0.46
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 206.14
158 TestMultiControlPlane/serial/DeployApp 6.19
159 TestMultiControlPlane/serial/PingHostFromPods 1.34
160 TestMultiControlPlane/serial/AddWorkerNode 51.06
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
163 TestMultiControlPlane/serial/CopyFile 13.58
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.53
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/RestartCluster 378.02
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
174 TestMultiControlPlane/serial/AddSecondaryNode 74.11
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
179 TestJSONOutput/start/Command 56.89
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.76
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.67
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.23
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 88.12
211 TestMountStart/serial/StartWithMountFirst 26.64
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 25.44
214 TestMountStart/serial/VerifyMountSecond 0.4
215 TestMountStart/serial/DeleteFirst 0.57
216 TestMountStart/serial/VerifyMountPostDelete 0.41
217 TestMountStart/serial/Stop 1.29
218 TestMountStart/serial/RestartStopped 22.47
219 TestMountStart/serial/VerifyMountPostStop 0.4
222 TestMultiNode/serial/FreshStart2Nodes 111.48
223 TestMultiNode/serial/DeployApp2Nodes 6.86
224 TestMultiNode/serial/PingHostFrom2Pods 0.9
225 TestMultiNode/serial/AddNode 47.21
226 TestMultiNode/serial/MultiNodeLabels 0.07
227 TestMultiNode/serial/ProfileList 0.62
228 TestMultiNode/serial/CopyFile 7.71
229 TestMultiNode/serial/StopNode 2.36
230 TestMultiNode/serial/StartAfterStop 38.48
232 TestMultiNode/serial/DeleteNode 2.25
234 TestMultiNode/serial/RestartMultiNode 199.01
235 TestMultiNode/serial/ValidateNameConflict 43.71
242 TestScheduledStopUnix 114.09
246 TestRunningBinaryUpgrade 218.54
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 94.26
267 TestNetworkPlugins/group/false 3.52
271 TestNoKubernetes/serial/StartWithStopK8s 39.27
273 TestPause/serial/Start 108.92
274 TestNoKubernetes/serial/Start 49.76
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
276 TestNoKubernetes/serial/ProfileList 32.24
277 TestNoKubernetes/serial/Stop 1.32
278 TestNoKubernetes/serial/StartNoArgs 21.45
279 TestStoppedBinaryUpgrade/Setup 0.45
280 TestStoppedBinaryUpgrade/Upgrade 111.74
281 TestPause/serial/SecondStartNoReconfiguration 63.21
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
283 TestPause/serial/Pause 0.77
284 TestPause/serial/VerifyStatus 0.26
285 TestPause/serial/Unpause 0.69
286 TestPause/serial/PauseAgain 0.94
287 TestPause/serial/DeletePaused 0.73
288 TestPause/serial/VerifyDeletedResources 3.66
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
293 TestStartStop/group/no-preload/serial/FirstStart 121.14
295 TestStartStop/group/embed-certs/serial/FirstStart 55.49
296 TestStartStop/group/no-preload/serial/DeployApp 10.35
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
299 TestStartStop/group/embed-certs/serial/DeployApp 10.33
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
306 TestStartStop/group/no-preload/serial/SecondStart 685.11
307 TestStartStop/group/embed-certs/serial/SecondStart 600.02
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 298.89
310 TestStartStop/group/old-k8s-version/serial/Stop 6.32
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 619.27
326 TestStartStop/group/newest-cni/serial/FirstStart 47.78
327 TestNetworkPlugins/group/auto/Start 57.78
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
330 TestStartStop/group/newest-cni/serial/Stop 10.56
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
332 TestStartStop/group/newest-cni/serial/SecondStart 36.68
333 TestNetworkPlugins/group/auto/KubeletFlags 0.27
334 TestNetworkPlugins/group/auto/NetCatPod 14.33
335 TestNetworkPlugins/group/auto/DNS 0.22
336 TestNetworkPlugins/group/auto/Localhost 0.15
337 TestNetworkPlugins/group/auto/HairPin 0.16
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
341 TestStartStop/group/newest-cni/serial/Pause 4.66
342 TestNetworkPlugins/group/kindnet/Start 61.85
343 TestNetworkPlugins/group/calico/Start 99.66
344 TestNetworkPlugins/group/custom-flannel/Start 111.87
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
349 TestNetworkPlugins/group/kindnet/DNS 0.24
350 TestNetworkPlugins/group/kindnet/Localhost 0.16
351 TestNetworkPlugins/group/kindnet/HairPin 0.17
352 TestNetworkPlugins/group/enable-default-cni/Start 88.49
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.24
355 TestNetworkPlugins/group/calico/NetCatPod 11.29
356 TestNetworkPlugins/group/calico/DNS 0.2
357 TestNetworkPlugins/group/calico/Localhost 0.14
358 TestNetworkPlugins/group/calico/HairPin 0.13
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.25
361 TestNetworkPlugins/group/custom-flannel/DNS 0.18
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
364 TestNetworkPlugins/group/flannel/Start 71.37
365 TestNetworkPlugins/group/bridge/Start 67.05
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
373 TestNetworkPlugins/group/flannel/NetCatPod 11.26
374 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
375 TestNetworkPlugins/group/bridge/NetCatPod 11.24
376 TestNetworkPlugins/group/flannel/DNS 0.17
377 TestNetworkPlugins/group/flannel/Localhost 0.13
378 TestNetworkPlugins/group/flannel/HairPin 0.13
379 TestNetworkPlugins/group/bridge/DNS 0.18
380 TestNetworkPlugins/group/bridge/Localhost 0.15
381 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (7.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-096310 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-096310 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.166833706s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 12:07:54.012415  754324 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 12:07:54.012564  754324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-096310
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-096310: exit status 85 (71.730332ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-096310 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |          |
	|         | -p download-only-096310        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:07:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:07:46.893175  754336 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:07:46.893326  754336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:46.893338  754336 out.go:358] Setting ErrFile to fd 2...
	I1007 12:07:46.893342  754336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:46.893515  754336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	W1007 12:07:46.893657  754336 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18424-747025/.minikube/config/config.json: open /home/jenkins/minikube-integration/18424-747025/.minikube/config/config.json: no such file or directory
	I1007 12:07:46.894415  754336 out.go:352] Setting JSON to true
	I1007 12:07:46.895472  754336 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6616,"bootTime":1728296251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:07:46.895602  754336 start.go:139] virtualization: kvm guest
	I1007 12:07:46.898378  754336 out.go:97] [download-only-096310] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1007 12:07:46.898556  754336 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 12:07:46.898701  754336 notify.go:220] Checking for updates...
	I1007 12:07:46.900261  754336 out.go:169] MINIKUBE_LOCATION=18424
	I1007 12:07:46.901811  754336 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:07:46.903352  754336 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:07:46.904784  754336 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:07:46.906203  754336 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1007 12:07:46.908743  754336 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 12:07:46.909025  754336 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:07:46.942406  754336 out.go:97] Using the kvm2 driver based on user configuration
	I1007 12:07:46.942441  754336 start.go:297] selected driver: kvm2
	I1007 12:07:46.942448  754336 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:07:46.942862  754336 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:46.942968  754336 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18424-747025/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:07:46.959773  754336 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:07:46.959845  754336 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:07:46.960430  754336 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1007 12:07:46.960599  754336 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 12:07:46.960639  754336 cni.go:84] Creating CNI manager for ""
	I1007 12:07:46.960700  754336 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:07:46.960709  754336 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 12:07:46.960765  754336 start.go:340] cluster config:
	{Name:download-only-096310 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-096310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:46.960954  754336 iso.go:125] acquiring lock: {Name:mka662c1692705351df7c5ae20f5ba28bcc2df27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:07:46.963270  754336 out.go:97] Downloading VM boot image ...
	I1007 12:07:46.963337  754336 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:07:49.626581  754336 out.go:97] Starting "download-only-096310" primary control-plane node in "download-only-096310" cluster
	I1007 12:07:49.626614  754336 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 12:07:49.646272  754336 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 12:07:49.646334  754336 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:49.646531  754336 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 12:07:49.648559  754336 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 12:07:49.648592  754336 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1007 12:07:49.670374  754336 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-096310 host does not exist
	  To start a cluster, run: "minikube start -p download-only-096310"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-096310
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (3.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-478522 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-478522 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.79850532s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.80s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 12:07:58.176387  754324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 12:07:58.176444  754324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-747025/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-478522
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-478522: exit status 85 (72.842998ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-096310 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | -p download-only-096310        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| delete  | -p download-only-096310        | download-only-096310 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC | 07 Oct 24 12:07 UTC |
	| start   | -o=json --download-only        | download-only-478522 | jenkins | v1.34.0 | 07 Oct 24 12:07 UTC |                     |
	|         | -p download-only-478522        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:07:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:07:54.425159  754539 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:07:54.425318  754539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:54.425331  754539 out.go:358] Setting ErrFile to fd 2...
	I1007 12:07:54.425336  754539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:07:54.425607  754539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:07:54.426291  754539 out.go:352] Setting JSON to true
	I1007 12:07:54.427318  754539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6623,"bootTime":1728296251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:07:54.427443  754539 start.go:139] virtualization: kvm guest
	I1007 12:07:54.429710  754539 out.go:97] [download-only-478522] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:07:54.429937  754539 notify.go:220] Checking for updates...
	I1007 12:07:54.431598  754539 out.go:169] MINIKUBE_LOCATION=18424
	I1007 12:07:54.433476  754539 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:07:54.435008  754539 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:07:54.436798  754539 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:07:54.438347  754539 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-478522 host does not exist
	  To start a cluster, run: "minikube start -p download-only-478522"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-478522
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 12:07:58.830616  754324 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-969518 --alsologtostderr --binary-mirror http://127.0.0.1:40857 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-969518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-969518
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
x
+
TestOffline (114.07s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-484725 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-484725 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.152278651s)
helpers_test.go:175: Cleaning up "offline-crio-484725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-484725
--- PASS: TestOffline (114.07s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-054971
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-054971: exit status 85 (61.933669ms)

                                                
                                                
-- stdout --
	* Profile "addons-054971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054971"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-054971
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-054971: exit status 85 (62.899923ms)

                                                
                                                
-- stdout --
	* Profile "addons-054971" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054971"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (133.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-054971 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-054971 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.479434961s)
--- PASS: TestAddons/Setup (133.48s)
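
Note: this single start enables every addon under test via repeated --addons flags; the same addons can also be toggled on the running profile afterwards, which is what the disable steps in the parallel addon tests below rely on. A small sketch against the profile from this run:

    # Toggle an addon on an existing profile and list the current state.
    out/minikube-linux-amd64 -p addons-054971 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-054971 addons disable metrics-server
    out/minikube-linux-amd64 -p addons-054971 addons list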

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-054971 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-054971 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.388429ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-77gfb" [256d2114-d21b-4d85-a9d9-a1f7e3e0a43a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004133641s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vjrwk" [bdc2b33d-c287-48c5-a525-9c0e3933f162] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004813175s
addons_test.go:331: (dbg) Run:  kubectl --context addons-054971 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-054971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-054971 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.203059391s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 ip
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.13s)
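
Note: the check above boils down to proving that the in-cluster registry Service answers on its cluster-DNS name from an arbitrary pod. A hand-run version of the same probe (the pod name is illustrative; the image and URL are the ones the test uses):

    # One-off busybox pod that probes the registry Service over cluster DNS.
    kubectl --context addons-054971 run --rm registry-check --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # The test also resolves the node address (used for the registry-proxy check) with:
    out/minikube-linux-amd64 -p addons-054971 ip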

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xpqzj" [6934dda2-404f-429b-bfb1-4b5cf718e694] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013069399s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable inspektor-gadget --alsologtostderr -v=1: (6.286058981s)
--- PASS: TestAddons/parallel/InspektorGadget (11.30s)

                                                
                                    
x
+
TestAddons/parallel/CSI (37.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.89912ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e9bc9308-1459-48c9-8cd6-ab9ee865553d] Pending
helpers_test.go:344: "task-pv-pod" [e9bc9308-1459-48c9-8cd6-ab9ee865553d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e9bc9308-1459-48c9-8cd6-ab9ee865553d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.008424242s
addons_test.go:511: (dbg) Run:  kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/snapshot.yaml
2024/10/07 12:18:41 [DEBUG] GET http://192.168.39.62:5000
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-054971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-054971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-054971 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-054971 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9f1148c8-f2ed-42fc-b77b-b1947043159c] Pending
helpers_test.go:344: "task-pv-pod-restore" [9f1148c8-f2ed-42fc-b77b-b1947043159c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9f1148c8-f2ed-42fc-b77b-b1947043159c] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003904501s
addons_test.go:553: (dbg) Run:  kubectl --context addons-054971 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-054971 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-054971 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable volumesnapshots --alsologtostderr -v=1: (1.023107341s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.006197822s)
--- PASS: TestAddons/parallel/CSI (37.86s)
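
Note: this test walks the full csi-hostpath-driver lifecycle: claim a volume, mount it in a pod, snapshot it, then restore the snapshot into a fresh claim and pod. A condensed replay of the same sequence, using the manifests referenced in the log (paths are relative to the minikube test tree):

    kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-054971 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-054971 delete pod task-pv-pod
    kubectl --context addons-054971 delete pvc hpvc
    kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-054971 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml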

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-054971 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-054971 --alsologtostderr -v=1: (1.000413574s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-pbcbp" [b62cdea8-c260-42a3-a205-70cfc3bb1bf6] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-pbcbp" [b62cdea8-c260-42a3-a205-70cfc3bb1bf6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-pbcbp" [b62cdea8-c260-42a3-a205-70cfc3bb1bf6] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00522577s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable headlamp --alsologtostderr -v=1: (5.796742986s)
--- PASS: TestAddons/parallel/Headlamp (17.80s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-q6l2f" [4c7e51e5-b608-4061-b6b9-150040984526] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004169514s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.56s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-054971 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-054971 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c32df3e6-2465-48fe-8381-0326b82722e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c32df3e6-2465-48fe-8381-0326b82722e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c32df3e6-2465-48fe-8381-0326b82722e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003619723s
addons_test.go:901: (dbg) Run:  kubectl --context addons-054971 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 ssh "cat /opt/local-path-provisioner/pvc-47a9c7e0-2559-430c-a3e6-fa07201bf211_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-054971 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-054971 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.538983827s)
--- PASS: TestAddons/parallel/LocalPath (52.56s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-285h8" [cf2c616e-a6ca-4d0d-8e9b-c62ea66a2246] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00415568s
addons_test.go:961: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-054971
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jz5jd" [a4415e3e-59cd-483b-8e8d-cd2d8592fe4f] Running
I1007 12:18:25.185467  754324 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 12:18:25.185496  754324 kapi.go:107] duration metric: took 7.887292ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005391411s
addons_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-amd64 -p addons-054971 addons disable yakd --alsologtostderr -v=1: (6.037564237s)
--- PASS: TestAddons/parallel/Yakd (12.04s)

                                                
                                    
x
+
TestCertOptions (44.06s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-079658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1007 13:24:36.519838  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-079658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (42.883601011s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-079658 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-079658 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-079658 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-079658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-079658
--- PASS: TestCertOptions (44.06s)
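
Note: the extra --apiserver-ips/--apiserver-names values (and the non-default --apiserver-port) must end up in the generated API server certificate; the test verifies this with the openssl call above. Reproducing the SAN check by hand (the grep is an assumption about what to look for, not part of the test):

    out/minikube-linux-amd64 -p cert-options-079658 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # Expect 127.0.0.1, 192.168.15.15, localhost and www.google.com to appear.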

                                                
                                    
x
+
TestCertExpiration (472.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-004876 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-004876 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.98386873s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-004876 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-004876 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (3m44.846584514s)
helpers_test.go:175: Cleaning up "cert-expiration-004876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-004876
--- PASS: TestCertExpiration (472.74s)
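
Note: the two starts differ only in --cert-expiration: the first issues certificates valid for just 3 minutes, and the second (presumably after that window has lapsed) re-issues them for 8760h and confirms the cluster still comes up. A sketch with an illustrative profile name:

    # First start: deliberately short-lived certs.
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait for the 3m window to pass, then restart the same profile...
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio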

                                                
                                    
x
+
TestForceSystemdFlag (60.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-028990 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-028990 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.477487842s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-028990 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-028990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-028990
--- PASS: TestForceSystemdFlag (60.38s)
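
Note: --force-systemd asks minikube to configure the container runtime for the systemd cgroup manager; the test then reads CRI-O's drop-in config to confirm it. A hand-run version of the same check (the grep key is an assumption about the setting to look for):

    out/minikube-linux-amd64 -p force-systemd-flag-028990 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # Expected (assumption): cgroup_manager = "systemd"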

                                                
                                    
x
+
TestForceSystemdEnv (43.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-914657 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-914657 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.798527408s)
helpers_test.go:175: Cleaning up "force-systemd-env-914657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-914657
--- PASS: TestForceSystemdEnv (43.49s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.35s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1007 13:19:53.309013  754324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 13:19:53.309218  754324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1007 13:19:53.342016  754324 install.go:62] docker-machine-driver-kvm2: exit status 1
W1007 13:19:53.342452  754324 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 13:19:53.342532  754324 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2782362769/001/docker-machine-driver-kvm2
E1007 13:19:53.449118  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
I1007 13:19:53.646381  754324 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2782362769/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc000697cd0 gz:0xc000697cd8 tar:0xc000697c80 tar.bz2:0xc000697c90 tar.gz:0xc000697ca0 tar.xz:0xc000697cb0 tar.zst:0xc000697cc0 tbz2:0xc000697c90 tgz:0xc000697ca0 txz:0xc000697cb0 tzst:0xc000697cc0 xz:0xc000697ce0 zip:0xc000697cf0 zst:0xc000697ce8] Getters:map[file:0xc001dc5d50 http:0xc0004ddc20 https:0xc0004ddc70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 13:19:53.646439  754324 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2782362769/001/docker-machine-driver-kvm2
I1007 13:19:54.175556  754324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 13:19:54.175666  754324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1007 13:19:54.211339  754324 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1007 13:19:54.211382  754324 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1007 13:19:54.211466  754324 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 13:19:54.211500  754324 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2782362769/002/docker-machine-driver-kvm2
I1007 13:19:54.238151  754324 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2782362769/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc000697cd0 gz:0xc000697cd8 tar:0xc000697c80 tar.bz2:0xc000697c90 tar.gz:0xc000697ca0 tar.xz:0xc000697cb0 tar.zst:0xc000697cc0 tbz2:0xc000697c90 tgz:0xc000697ca0 txz:0xc000697cb0 tzst:0xc000697cc0 xz:0xc000697ce0 zip:0xc000697cf0 zst:0xc000697ce8] Getters:map[file:0xc0001154e0 http:0xc0006c4460 https:0xc0006c44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 13:19:54.238205  754324 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2782362769/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.35s)
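
Note: the 404s above are the expected fallback path rather than a failure: minikube first tries the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) and, when that asset does not exist for the requested version, retries the unsuffixed common name. The two URLs from the log can be checked directly (the curl invocation is illustrative, not part of the test):

    # Arch-specific asset: 404 for v1.3.0, which triggers the retry seen in the log.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64
    # Common asset that the fallback downloads.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2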

                                                
                                    
x
+
TestErrorSpam/setup (43.85s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-828725 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-828725 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-828725 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-828725 --driver=kvm2  --container-runtime=crio: (43.845046457s)
--- PASS: TestErrorSpam/setup (43.85s)

                                                
                                    
x
+
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
x
+
TestErrorSpam/stop (5.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop: (2.403065783s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop: (1.223878652s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-828725 --log_dir /tmp/nospam-828725 stop: (1.697031129s)
--- PASS: TestErrorSpam/stop (5.32s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18424-747025/.minikube/files/etc/test/nested/copy/754324/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-282904 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.206110366s)
--- PASS: TestFunctional/serial/StartWithProxy (52.21s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (45.13s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1007 12:28:09.011954  754324 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-282904 --alsologtostderr -v=8: (45.133119297s)
functional_test.go:663: soft start took 45.13404104s for "functional-282904" cluster.
I1007 12:28:54.145512  754324 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (45.13s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-282904 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:3.1: (1.166232988s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:3.3: (1.221024742s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 cache add registry.k8s.io/pause:latest: (1.108686262s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-282904 /tmp/TestFunctionalserialCacheCmdcacheadd_local3327439468/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache add minikube-local-cache-test:functional-282904
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache delete minikube-local-cache-test:functional-282904
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-282904
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
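
Note: add_local builds a throwaway image with the host's docker daemon, pushes it into minikube's image cache, and then removes it again. The same flow by hand (the tag and build-context directory are illustrative):

    docker build -t minikube-local-cache-test:demo ./build-context   # any directory with a Dockerfile
    out/minikube-linux-amd64 -p functional-282904 cache add minikube-local-cache-test:demo
    out/minikube-linux-amd64 -p functional-282904 cache delete minikube-local-cache-test:demo
    docker rmi minikube-local-cache-test:demo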

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.051111ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 cache reload: (1.026520737s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
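
Note: the reload test removes a cached image inside the node with crictl, confirms that inspecti now fails, then uses "cache reload" to push everything in the local cache back into the runtime. The same round trip by hand:

    out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-amd64 -p functional-282904 cache reload
    out/minikube-linux-amd64 -p functional-282904 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again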

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 kubectl -- --context functional-282904 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-282904 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (44.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-282904 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.519385122s)
functional_test.go:761: restart took 44.51951734s for "functional-282904" cluster.
I1007 12:29:45.905768  754324 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (44.52s)
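
Note: --extra-config passes flags straight through to the named control-plane component; here it enables the NamespaceAutoProvision admission plugin on the API server, and the restart keeps the existing cluster. The general form is component.flag=value:

    out/minikube-linux-amd64 start -p functional-282904 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all
    # Other components (e.g. kubelet, scheduler, controller-manager) can be
    # targeted the same way with additional --extra-config flags.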

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-282904 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 logs: (1.590065325s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 logs --file /tmp/TestFunctionalserialLogsFileCmd534625879/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 logs --file /tmp/TestFunctionalserialLogsFileCmd534625879/001/logs.txt: (1.596118822s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-282904 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-282904
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-282904: exit status 115 (299.354215ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.72:32754 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-282904 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.59s)
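
Note: the exit-115 output above is the expected behaviour: "minikube service" refuses to hand out a URL for a Service whose selector matches no running pod and exits with SVC_UNREACHABLE. The probe and cleanup, using the manifest named in the log:

    kubectl --context functional-282904 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-282904   # exits 115: SVC_UNREACHABLE
    kubectl --context functional-282904 delete -f testdata/invalidsvc.yaml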

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 config get cpus: exit status 14 (55.478286ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 config get cpus: exit status 14 (70.515439ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
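
Note: "minikube config" is a small key/value store for minikube settings; "config get" on an unset key exits 14 with "specified key could not be found in config", which is exactly what the test asserts before set and after unset. The round trip by hand:

    out/minikube-linux-amd64 -p functional-282904 config get cpus     # exit 14 while unset
    out/minikube-linux-amd64 -p functional-282904 config set cpus 2
    out/minikube-linux-amd64 -p functional-282904 config get cpus     # now prints 2
    out/minikube-linux-amd64 -p functional-282904 config unset cpus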

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-282904 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-282904 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 766026: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.22s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-282904 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.781298ms)

                                                
                                                
-- stdout --
	* [functional-282904] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:31:00.270730  765646 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:00.270856  765646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:00.270864  765646 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:00.270869  765646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:00.271073  765646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:00.271749  765646 out.go:352] Setting JSON to false
	I1007 12:31:00.272988  765646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8009,"bootTime":1728296251,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:00.273057  765646 start.go:139] virtualization: kvm guest
	I1007 12:31:00.275582  765646 out.go:177] * [functional-282904] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:00.277000  765646 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:00.277048  765646 notify.go:220] Checking for updates...
	I1007 12:31:00.279622  765646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:00.281214  765646 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:00.284633  765646 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:00.286093  765646 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:00.287552  765646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:00.289682  765646 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:00.290351  765646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:00.290428  765646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:00.307582  765646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I1007 12:31:00.308107  765646 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:00.308699  765646 main.go:141] libmachine: Using API Version  1
	I1007 12:31:00.308726  765646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:00.309085  765646 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:00.309243  765646 main.go:141] libmachine: (functional-282904) Calling .DriverName
	I1007 12:31:00.309504  765646 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:00.309807  765646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:00.309854  765646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:00.325861  765646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I1007 12:31:00.326362  765646 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:00.326912  765646 main.go:141] libmachine: Using API Version  1
	I1007 12:31:00.326930  765646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:00.327293  765646 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:00.327522  765646 main.go:141] libmachine: (functional-282904) Calling .DriverName
	I1007 12:31:00.364309  765646 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:31:00.366341  765646 start.go:297] selected driver: kvm2
	I1007 12:31:00.366369  765646 start.go:901] validating driver "kvm2" against &{Name:functional-282904 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-282904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:00.366546  765646 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:00.369540  765646 out.go:201] 
	W1007 12:31:00.371794  765646 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 12:31:00.373636  765646 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
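The exit-status-23 failure above comes from minikube's pre-flight memory validation: the requested 250MiB is below the usable minimum it enforces, so the dry run stops before any VM work. A rough sketch of that rule, with a made-up helper name and the 1800MB floor taken from the message above:

package main

import "fmt"

const minUsableMemoryMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message above

// validateRequestedMemoryMB is an illustrative helper, not minikube's actual validator.
func validateRequestedMemoryMB(req int) error {
	if req < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", req, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemoryMB(250))  // rejected, as in the dry-run above
	fmt.Println(validateRequestedMemoryMB(4000)) // accepted (the profile default in this report)
}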

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-282904 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-282904 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.125218ms)

                                                
                                                
-- stdout --
	* [functional-282904] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:31:00.605717  765724 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:31:00.605833  765724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:00.605840  765724 out.go:358] Setting ErrFile to fd 2...
	I1007 12:31:00.605847  765724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:31:00.606306  765724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 12:31:00.606970  765724 out.go:352] Setting JSON to false
	I1007 12:31:00.608415  765724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8010,"bootTime":1728296251,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:31:00.608483  765724 start.go:139] virtualization: kvm guest
	I1007 12:31:00.611274  765724 out.go:177] * [functional-282904] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1007 12:31:00.613390  765724 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:31:00.613522  765724 notify.go:220] Checking for updates...
	I1007 12:31:00.617132  765724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:31:00.619180  765724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 12:31:00.621166  765724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 12:31:00.622678  765724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:31:00.624071  765724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:31:00.626348  765724 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:31:00.627030  765724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:00.627115  765724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:00.644297  765724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I1007 12:31:00.644759  765724 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:00.645441  765724 main.go:141] libmachine: Using API Version  1
	I1007 12:31:00.645472  765724 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:00.645836  765724 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:00.646075  765724 main.go:141] libmachine: (functional-282904) Calling .DriverName
	I1007 12:31:00.646376  765724 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:31:00.646837  765724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:31:00.646893  765724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:31:00.664227  765724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I1007 12:31:00.664956  765724 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:31:00.665491  765724 main.go:141] libmachine: Using API Version  1
	I1007 12:31:00.665515  765724 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:31:00.666060  765724 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:31:00.666332  765724 main.go:141] libmachine: (functional-282904) Calling .DriverName
	I1007 12:31:00.702742  765724 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1007 12:31:00.704364  765724 start.go:297] selected driver: kvm2
	I1007 12:31:00.704388  765724 start.go:901] validating driver "kvm2" against &{Name:functional-282904 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-282904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:31:00.704549  765724 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:31:00.706773  765724 out.go:201] 
	W1007 12:31:00.708326  765724 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 12:31:00.710084  765724 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (56.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-282904 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-282904 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9k9dl" [2151cf57-fa26-49e7-86eb-598b2a406efd] Pending
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9k9dl" [2151cf57-fa26-49e7-86eb-598b2a406efd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9k9dl" [2151cf57-fa26-49e7-86eb-598b2a406efd] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 56.004506915s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.72:32382
functional_test.go:1675: http://192.168.39.72:32382: success! body:

Hostname: hello-node-connect-67bdd5bbb4-9k9dl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.72:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.72:32382
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (56.48s)
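A minimal sketch of the final connectivity check the test performs, assuming a NodePort URL like the one printed above (this is not the actual test helper):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.72:32382" // endpoint reported by "minikube service hello-node-connect --url"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver replies with its hostname and request details, as quoted above.
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}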

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (79.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9e2cac98-c79f-41ab-935f-ecdf83e747fc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005543635s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-282904 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-282904 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-282904 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-282904 get pvc myclaim -o=json
I1007 12:30:02.532915  754324 retry.go:31] will retry after 2.226045294s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c3de5560-39bc-4dd0-bdcc-f49b3a9a2432 ResourceVersion:645 Generation:0 CreationTimestamp:2024-10-07 12:30:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001578290 VolumeMode:0xc0015782d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-282904 get pvc myclaim -o=json
I1007 12:30:04.819557  754324 retry.go:31] will retry after 5.657459376s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c3de5560-39bc-4dd0-bdcc-f49b3a9a2432 ResourceVersion:645 Generation:0 CreationTimestamp:2024-10-07 12:30:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00186d9a0 VolumeMode:0xc00186d9b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-282904 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-282904 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf51ba3c-b5a3-4dbf-a6da-a80f7e0b3159] Pending
E1007 12:30:13.698375  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:13.704857  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:13.716312  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:13.737778  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:13.779323  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:13.860865  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:14.022458  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:14.344601  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:14.986610  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:16.268852  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:18.831099  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:23.952865  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:34.194868  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [bf51ba3c-b5a3-4dbf-a6da-a80f7e0b3159] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bf51ba3c-b5a3-4dbf-a6da-a80f7e0b3159] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 45.004441466s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-282904 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-282904 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-282904 delete -f testdata/storage-provisioner/pod.yaml: (3.354071125s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-282904 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [93b5f77e-b6ea-47fe-a0f7-5d80d5cd6217] Pending
helpers_test.go:344: "sp-pod" [93b5f77e-b6ea-47fe-a0f7-5d80d5cd6217] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [93b5f77e-b6ea-47fe-a0f7-5d80d5cd6217] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.008124437s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-282904 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (79.24s)
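The retry lines above poll the claim until its phase reaches "Bound". A rough sketch of that wait loop, assuming kubectl is on PATH and using the context and claim name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase reads .status.phase of a PersistentVolumeClaim via kubectl's jsonpath output.
func pvcPhase(context, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("functional-282904", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		fmt.Printf("pvc phase %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc to bind")
}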

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh -n functional-282904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cp functional-282904:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2817863006/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh -n functional-282904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh -n functional-282904 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-282904 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-4x48p" [ebaae760-7e25-414e-84ec-61350345651f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-4x48p" [ebaae760-7e25-414e-84ec-61350345651f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.005767591s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-282904 exec mysql-6cdb49bbb-4x48p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-282904 exec mysql-6cdb49bbb-4x48p -- mysql -ppassword -e "show databases;": exit status 1 (359.728164ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1007 12:31:13.918635  754324 retry.go:31] will retry after 1.041758303s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-282904 exec mysql-6cdb49bbb-4x48p -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-282904 exec mysql-6cdb49bbb-4x48p -- mysql -ppassword -e "show databases;": exit status 1 (148.384591ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1007 12:31:15.109821  754324 retry.go:31] will retry after 795.70165ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-282904 exec mysql-6cdb49bbb-4x48p -- mysql -ppassword -e "show databases;"
2024/10/07 12:31:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (24.69s)
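ERROR 2002 simply means mysqld has not opened its socket yet, so the test retries the query until it succeeds. A sketch of that retry pattern, assuming kubectl on PATH and the pod name from this particular run (it changes every run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-4x48p" // pod name from this report; a real run would look it up first
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-282904", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s\n", attempt, out)
			return
		}
		// ERROR 2002 just means mysqld has not finished starting yet.
		fmt.Printf("attempt %d failed (%v); retrying\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}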

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/754324/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /etc/test/nested/copy/754324/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/754324.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /etc/ssl/certs/754324.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/754324.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /usr/share/ca-certificates/754324.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7543242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /etc/ssl/certs/7543242.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7543242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /usr/share/ca-certificates/7543242.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-282904 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "sudo systemctl is-active docker": exit status 1 (225.079474ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "sudo systemctl is-active containerd": exit status 1 (245.16024ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
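With cri-o as the container runtime, the test expects "systemctl is-active" to report docker and containerd as inactive, which surfaces as the non-zero ssh exits shown above. A sketch of the same check across all three services, assuming the binary path and profile from this report:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, svc := range []string{"crio", "docker", "containerd"} {
		// "minikube ssh" propagates the remote command's exit status,
		// so an inactive unit shows up here as a non-nil error.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-282904",
			"ssh", "sudo systemctl is-active "+svc)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%-10s active=%v output=%q\n", svc, err == nil, string(out))
	}
}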

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-282904 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-282904
localhost/kicbase/echo-server:functional-282904
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-282904 image ls --format short --alsologtostderr:
I1007 12:31:02.331281  765925 out.go:345] Setting OutFile to fd 1 ...
I1007 12:31:02.331503  765925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:02.331520  765925 out.go:358] Setting ErrFile to fd 2...
I1007 12:31:02.331527  765925 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:02.331788  765925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
I1007 12:31:02.332625  765925 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:02.332731  765925 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:02.333322  765925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:02.333425  765925 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:02.350661  765925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
I1007 12:31:02.351236  765925 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:02.351861  765925 main.go:141] libmachine: Using API Version  1
I1007 12:31:02.351886  765925 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:02.352269  765925 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:02.352483  765925 main.go:141] libmachine: (functional-282904) Calling .GetState
I1007 12:31:02.355274  765925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:02.355347  765925 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:02.371581  765925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
I1007 12:31:02.372302  765925 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:02.372824  765925 main.go:141] libmachine: Using API Version  1
I1007 12:31:02.372848  765925 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:02.373318  765925 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:02.373512  765925 main.go:141] libmachine: (functional-282904) Calling .DriverName
I1007 12:31:02.373793  765925 ssh_runner.go:195] Run: systemctl --version
I1007 12:31:02.373829  765925 main.go:141] libmachine: (functional-282904) Calling .GetSSHHostname
I1007 12:31:02.378928  765925 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:02.379552  765925 main.go:141] libmachine: (functional-282904) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:db", ip: ""} in network mk-functional-282904: {Iface:virbr1 ExpiryTime:2024-10-07 13:27:30 +0000 UTC Type:0 Mac:52:54:00:57:fe:db Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-282904 Clientid:01:52:54:00:57:fe:db}
I1007 12:31:02.379599  765925 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined IP address 192.168.39.72 and MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:02.379801  765925 main.go:141] libmachine: (functional-282904) Calling .GetSSHPort
I1007 12:31:02.380154  765925 main.go:141] libmachine: (functional-282904) Calling .GetSSHKeyPath
I1007 12:31:02.380441  765925 main.go:141] libmachine: (functional-282904) Calling .GetSSHUsername
I1007 12:31:02.380708  765925 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/functional-282904/id_rsa Username:docker}
I1007 12:31:02.506493  765925 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 12:31:02.584635  765925 main.go:141] libmachine: Making call to close driver server
I1007 12:31:02.584652  765925 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:02.585155  765925 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:02.585211  765925 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:02.585221  765925 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:02.585235  765925 main.go:141] libmachine: Making call to close driver server
I1007 12:31:02.585246  765925 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:02.585651  765925 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:02.585734  765925 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:02.585791  765925 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-282904 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-282904  | 9211d8ee8a5da | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-282904  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-282904  | 2b71d32fbd6a3 | 3.33kB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-282904 image ls --format table --alsologtostderr:
I1007 12:31:09.671334  766114 out.go:345] Setting OutFile to fd 1 ...
I1007 12:31:09.671500  766114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:09.671512  766114 out.go:358] Setting ErrFile to fd 2...
I1007 12:31:09.671520  766114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:09.671897  766114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
I1007 12:31:09.673010  766114 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:09.673168  766114 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:09.673832  766114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:09.673902  766114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:09.689730  766114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
I1007 12:31:09.690320  766114 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:09.690935  766114 main.go:141] libmachine: Using API Version  1
I1007 12:31:09.690964  766114 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:09.691378  766114 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:09.691575  766114 main.go:141] libmachine: (functional-282904) Calling .GetState
I1007 12:31:09.693527  766114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:09.693582  766114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:09.709761  766114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
I1007 12:31:09.710303  766114 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:09.710935  766114 main.go:141] libmachine: Using API Version  1
I1007 12:31:09.710961  766114 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:09.711330  766114 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:09.711524  766114 main.go:141] libmachine: (functional-282904) Calling .DriverName
I1007 12:31:09.711751  766114 ssh_runner.go:195] Run: systemctl --version
I1007 12:31:09.711778  766114 main.go:141] libmachine: (functional-282904) Calling .GetSSHHostname
I1007 12:31:09.714943  766114 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:09.715366  766114 main.go:141] libmachine: (functional-282904) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:db", ip: ""} in network mk-functional-282904: {Iface:virbr1 ExpiryTime:2024-10-07 13:27:30 +0000 UTC Type:0 Mac:52:54:00:57:fe:db Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-282904 Clientid:01:52:54:00:57:fe:db}
I1007 12:31:09.715406  766114 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined IP address 192.168.39.72 and MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:09.715567  766114 main.go:141] libmachine: (functional-282904) Calling .GetSSHPort
I1007 12:31:09.715786  766114 main.go:141] libmachine: (functional-282904) Calling .GetSSHKeyPath
I1007 12:31:09.715978  766114 main.go:141] libmachine: (functional-282904) Calling .GetSSHUsername
I1007 12:31:09.716221  766114 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/functional-282904/id_rsa Username:docker}
I1007 12:31:09.837943  766114 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 12:31:09.917117  766114 main.go:141] libmachine: Making call to close driver server
I1007 12:31:09.917134  766114 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:09.917437  766114 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:09.917461  766114 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:09.917478  766114 main.go:141] libmachine: Making call to close driver server
I1007 12:31:09.917486  766114 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:09.917781  766114 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:09.917797  766114 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:09.917804  766114 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-282904 image ls --format json --alsologtostderr:
[{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"33dbe6da38f44c3a86caa44c781d266169479512d1cb8344f5a93e06461ea2cd","repoDigests":["docker.io/library/4ccd222d2c8f7adbff7baf239858b4d67994d2b8f4d44b5c3338aad58fb0b606-tmp@sha256:8b08b5fda9abddbcbe5de81eafb1d55c8466565b39fce07a1f705d7d617ba044"],"repoTags":[],"size":"1466018"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"56cc
512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9211d8ee8a5da8c98f4d56fd494130cc635ed5c6c520ccf42424905e167b2d00","repoDigests":["localhost/my-image@sha256:1761f2425e45d90b168a2752ce5c67685e2e18d49e59b9447c8f8d385a7a189d"],"repoTags":["localhost/my-image:functional-282904"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c1
22965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"2b71d32fbd6a305d7433ff0b180832b44a9a7e3a8e88877f3abb1e2817721119","repoDigests":["localhost/minikube-local-cache-test@sha256:0d8895e2ca23269f10224c5b6d83527d6406698c3d03f80b36c41019b17c6c44"],"repoTags":["localhost/minikube-local-cache-test:functional-282904"],"size":"3330"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"]
,"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"
],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-282904"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"60c00
5f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-282904 image ls --format json --alsologtostderr:
I1007 12:31:09.347846  766091 out.go:345] Setting OutFile to fd 1 ...
I1007 12:31:09.347983  766091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:09.347994  766091 out.go:358] Setting ErrFile to fd 2...
I1007 12:31:09.348000  766091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:09.348175  766091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
I1007 12:31:09.348835  766091 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:09.348974  766091 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:09.349423  766091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:09.349484  766091 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:09.366180  766091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
I1007 12:31:09.366777  766091 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:09.367399  766091 main.go:141] libmachine: Using API Version  1
I1007 12:31:09.367420  766091 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:09.367792  766091 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:09.367986  766091 main.go:141] libmachine: (functional-282904) Calling .GetState
I1007 12:31:09.369989  766091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:09.370241  766091 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:09.386246  766091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
I1007 12:31:09.386716  766091 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:09.387212  766091 main.go:141] libmachine: Using API Version  1
I1007 12:31:09.387243  766091 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:09.387700  766091 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:09.387913  766091 main.go:141] libmachine: (functional-282904) Calling .DriverName
I1007 12:31:09.388346  766091 ssh_runner.go:195] Run: systemctl --version
I1007 12:31:09.388405  766091 main.go:141] libmachine: (functional-282904) Calling .GetSSHHostname
I1007 12:31:09.392265  766091 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:09.392725  766091 main.go:141] libmachine: (functional-282904) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:db", ip: ""} in network mk-functional-282904: {Iface:virbr1 ExpiryTime:2024-10-07 13:27:30 +0000 UTC Type:0 Mac:52:54:00:57:fe:db Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-282904 Clientid:01:52:54:00:57:fe:db}
I1007 12:31:09.392760  766091 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined IP address 192.168.39.72 and MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:09.392967  766091 main.go:141] libmachine: (functional-282904) Calling .GetSSHPort
I1007 12:31:09.393224  766091 main.go:141] libmachine: (functional-282904) Calling .GetSSHKeyPath
I1007 12:31:09.393528  766091 main.go:141] libmachine: (functional-282904) Calling .GetSSHUsername
I1007 12:31:09.393866  766091 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/functional-282904/id_rsa Username:docker}
I1007 12:31:09.504297  766091 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 12:31:09.602914  766091 main.go:141] libmachine: Making call to close driver server
I1007 12:31:09.602938  766091 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:09.603230  766091 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:09.603250  766091 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:09.603257  766091 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:09.603265  766091 main.go:141] libmachine: Making call to close driver server
I1007 12:31:09.603272  766091 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:09.603545  766091 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:09.603591  766091 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:09.603621  766091 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-282904 image ls --format yaml --alsologtostderr:
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 2b71d32fbd6a305d7433ff0b180832b44a9a7e3a8e88877f3abb1e2817721119
repoDigests:
- localhost/minikube-local-cache-test@sha256:0d8895e2ca23269f10224c5b6d83527d6406698c3d03f80b36c41019b17c6c44
repoTags:
- localhost/minikube-local-cache-test:functional-282904
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-282904
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-282904 image ls --format yaml --alsologtostderr:
I1007 12:31:02.652544  765949 out.go:345] Setting OutFile to fd 1 ...
I1007 12:31:02.652699  765949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:02.652711  765949 out.go:358] Setting ErrFile to fd 2...
I1007 12:31:02.652719  765949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:02.652975  765949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
I1007 12:31:02.653716  765949 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:02.653855  765949 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:02.654351  765949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:02.654420  765949 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:02.672144  765949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
I1007 12:31:02.672643  765949 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:02.673566  765949 main.go:141] libmachine: Using API Version  1
I1007 12:31:02.673616  765949 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:02.674144  765949 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:02.674487  765949 main.go:141] libmachine: (functional-282904) Calling .GetState
I1007 12:31:02.677831  765949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:02.677896  765949 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:02.695845  765949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
I1007 12:31:02.696514  765949 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:02.697128  765949 main.go:141] libmachine: Using API Version  1
I1007 12:31:02.697157  765949 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:02.697591  765949 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:02.697841  765949 main.go:141] libmachine: (functional-282904) Calling .DriverName
I1007 12:31:02.698131  765949 ssh_runner.go:195] Run: systemctl --version
I1007 12:31:02.698178  765949 main.go:141] libmachine: (functional-282904) Calling .GetSSHHostname
I1007 12:31:02.702405  765949 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:02.702942  765949 main.go:141] libmachine: (functional-282904) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:db", ip: ""} in network mk-functional-282904: {Iface:virbr1 ExpiryTime:2024-10-07 13:27:30 +0000 UTC Type:0 Mac:52:54:00:57:fe:db Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-282904 Clientid:01:52:54:00:57:fe:db}
I1007 12:31:02.702999  765949 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined IP address 192.168.39.72 and MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:02.703207  765949 main.go:141] libmachine: (functional-282904) Calling .GetSSHPort
I1007 12:31:02.703416  765949 main.go:141] libmachine: (functional-282904) Calling .GetSSHKeyPath
I1007 12:31:02.703583  765949 main.go:141] libmachine: (functional-282904) Calling .GetSSHUsername
I1007 12:31:02.703767  765949 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/functional-282904/id_rsa Username:docker}
I1007 12:31:02.892912  765949 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 12:31:03.034087  765949 main.go:141] libmachine: Making call to close driver server
I1007 12:31:03.034109  765949 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:03.034443  765949 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:03.034460  765949 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:03.034473  765949 main.go:141] libmachine: Making call to close driver server
I1007 12:31:03.034480  765949 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:03.034482  765949 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
I1007 12:31:03.034699  765949 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:03.034713  765949 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh pgrep buildkitd: exit status 1 (300.969353ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image build -t localhost/my-image:functional-282904 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 image build -t localhost/my-image:functional-282904 testdata/build --alsologtostderr: (5.628110296s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-282904 image build -t localhost/my-image:functional-282904 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 33dbe6da38f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-282904
--> 9211d8ee8a5
Successfully tagged localhost/my-image:functional-282904
9211d8ee8a5da8c98f4d56fd494130cc635ed5c6c520ccf42424905e167b2d00
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-282904 image build -t localhost/my-image:functional-282904 testdata/build --alsologtostderr:
I1007 12:31:03.394228  766003 out.go:345] Setting OutFile to fd 1 ...
I1007 12:31:03.394571  766003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:03.394594  766003 out.go:358] Setting ErrFile to fd 2...
I1007 12:31:03.394601  766003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:31:03.395009  766003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
I1007 12:31:03.396007  766003 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:03.396657  766003 config.go:182] Loaded profile config "functional-282904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 12:31:03.397058  766003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:03.397116  766003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:03.415805  766003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
I1007 12:31:03.416783  766003 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:03.417548  766003 main.go:141] libmachine: Using API Version  1
I1007 12:31:03.417585  766003 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:03.418046  766003 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:03.418454  766003 main.go:141] libmachine: (functional-282904) Calling .GetState
I1007 12:31:03.421279  766003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 12:31:03.421347  766003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 12:31:03.439138  766003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
I1007 12:31:03.439729  766003 main.go:141] libmachine: () Calling .GetVersion
I1007 12:31:03.440333  766003 main.go:141] libmachine: Using API Version  1
I1007 12:31:03.440364  766003 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 12:31:03.440728  766003 main.go:141] libmachine: () Calling .GetMachineName
I1007 12:31:03.441066  766003 main.go:141] libmachine: (functional-282904) Calling .DriverName
I1007 12:31:03.441379  766003 ssh_runner.go:195] Run: systemctl --version
I1007 12:31:03.441416  766003 main.go:141] libmachine: (functional-282904) Calling .GetSSHHostname
I1007 12:31:03.445427  766003 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:03.445907  766003 main.go:141] libmachine: (functional-282904) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:db", ip: ""} in network mk-functional-282904: {Iface:virbr1 ExpiryTime:2024-10-07 13:27:30 +0000 UTC Type:0 Mac:52:54:00:57:fe:db Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-282904 Clientid:01:52:54:00:57:fe:db}
I1007 12:31:03.445982  766003 main.go:141] libmachine: (functional-282904) DBG | domain functional-282904 has defined IP address 192.168.39.72 and MAC address 52:54:00:57:fe:db in network mk-functional-282904
I1007 12:31:03.446127  766003 main.go:141] libmachine: (functional-282904) Calling .GetSSHPort
I1007 12:31:03.446503  766003 main.go:141] libmachine: (functional-282904) Calling .GetSSHKeyPath
I1007 12:31:03.446743  766003 main.go:141] libmachine: (functional-282904) Calling .GetSSHUsername
I1007 12:31:03.446937  766003 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/functional-282904/id_rsa Username:docker}
I1007 12:31:03.605583  766003 build_images.go:161] Building image from path: /tmp/build.3661411.tar
I1007 12:31:03.605677  766003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 12:31:03.629170  766003 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3661411.tar
I1007 12:31:03.638924  766003 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3661411.tar: stat -c "%s %y" /var/lib/minikube/build/build.3661411.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3661411.tar': No such file or directory
I1007 12:31:03.638965  766003 ssh_runner.go:362] scp /tmp/build.3661411.tar --> /var/lib/minikube/build/build.3661411.tar (3072 bytes)
I1007 12:31:03.715906  766003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3661411
I1007 12:31:03.754726  766003 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3661411 -xf /var/lib/minikube/build/build.3661411.tar
I1007 12:31:03.778195  766003 crio.go:315] Building image: /var/lib/minikube/build/build.3661411
I1007 12:31:03.778280  766003 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-282904 /var/lib/minikube/build/build.3661411 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1007 12:31:08.920265  766003 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-282904 /var/lib/minikube/build/build.3661411 --cgroup-manager=cgroupfs: (5.141946877s)
I1007 12:31:08.920404  766003 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3661411
I1007 12:31:08.947168  766003 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3661411.tar
I1007 12:31:08.960783  766003 build_images.go:217] Built localhost/my-image:functional-282904 from /tmp/build.3661411.tar
I1007 12:31:08.960826  766003 build_images.go:133] succeeded building to: functional-282904
I1007 12:31:08.960831  766003 build_images.go:134] failed building to: 
I1007 12:31:08.960858  766003 main.go:141] libmachine: Making call to close driver server
I1007 12:31:08.960867  766003 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:08.961222  766003 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:08.961244  766003 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:08.961254  766003 main.go:141] libmachine: Making call to close driver server
I1007 12:31:08.961263  766003 main.go:141] libmachine: (functional-282904) Calling .Close
I1007 12:31:08.961517  766003 main.go:141] libmachine: Successfully made call to close driver server
I1007 12:31:08.961536  766003 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 12:31:08.961607  766003 main.go:141] libmachine: (functional-282904) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.25s)
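Reproduction note: the build steps logged above imply a minimal three-instruction build context under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); the file itself is not included in this report. A rough sketch of the same flow against a running functional-282904 profile, using only the commands shown in this test:

# sketch only; assumes the profile and the testdata/build context exist
out/minikube-linux-amd64 -p functional-282904 ssh pgrep buildkitd    # exits 1 on the crio runtime here, which the test tolerates
out/minikube-linux-amd64 -p functional-282904 image build -t localhost/my-image:functional-282904 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-282904 image ls               # localhost/my-image:functional-282904 should now be listed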

TestFunctional/parallel/ImageCommands/Setup (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-282904
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (65.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-282904 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-282904 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-kf7kl" [2889ad57-dbae-479f-800e-d6ed11a6a6fd] Pending
helpers_test.go:344: "hello-node-6b9f76b5c7-kf7kl" [2889ad57-dbae-479f-800e-d6ed11a6a6fd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-kf7kl" [2889ad57-dbae-479f-800e-d6ed11a6a6fd] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m5.005103859s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (65.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image load --daemon kicbase/echo-server:functional-282904 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-282904 image load --daemon kicbase/echo-server:functional-282904 --alsologtostderr: (1.282298911s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image load --daemon kicbase/echo-server:functional-282904 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-282904
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image load --daemon kicbase/echo-server:functional-282904 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image save kicbase/echo-server:functional-282904 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image rm kicbase/echo-server:functional-282904 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-282904
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 image save --daemon kicbase/echo-server:functional-282904 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-282904
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
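Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above cover the image round-trip between the cluster runtime and the host. A condensed sketch of that flow, assuming the same profile and an arbitrary scratch path for the tarball (./echo-server-save.tar below is illustrative, not the path used in this run):

# save the in-cluster image to a tarball on the host
out/minikube-linux-amd64 -p functional-282904 image save kicbase/echo-server:functional-282904 ./echo-server-save.tar --alsologtostderr
# remove it from the cluster, then restore it from the tarball
out/minikube-linux-amd64 -p functional-282904 image rm kicbase/echo-server:functional-282904 --alsologtostderr
out/minikube-linux-amd64 -p functional-282904 image load ./echo-server-save.tar --alsologtostderr
# or copy it straight into the local docker daemon and confirm it arrived
out/minikube-linux-amd64 -p functional-282904 image save --daemon kicbase/echo-server:functional-282904 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-282904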

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
I1007 12:30:00.969364  754324 retry.go:31] will retry after 1.49727217s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c3de5560-39bc-4dd0-bdcc-f49b3a9a2432 ResourceVersion:645 Generation:0 CreationTimestamp:2024-10-07 12:30:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000289b50 VolumeMode:0xc000289b90 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:1315: Took "295.71144ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.708441ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "280.007261ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "55.395134ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/MountCmd/any-port (54.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdany-port3298251704/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728304201596595406" to /tmp/TestFunctionalparallelMountCmdany-port3298251704/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728304201596595406" to /tmp/TestFunctionalparallelMountCmdany-port3298251704/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728304201596595406" to /tmp/TestFunctionalparallelMountCmdany-port3298251704/001/test-1728304201596595406
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.030826ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:30:01.813936  754324 retry.go:31] will retry after 633.487883ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 12:30 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 12:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 12:30 test-1728304201596595406
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh cat /mount-9p/test-1728304201596595406
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-282904 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [80d6824e-8478-4801-a559-8f16d5b234f5] Pending
helpers_test.go:344: "busybox-mount" [80d6824e-8478-4801-a559-8f16d5b234f5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [80d6824e-8478-4801-a559-8f16d5b234f5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1007 12:30:54.676285  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [80d6824e-8478-4801-a559-8f16d5b234f5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 52.003762409s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-282904 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdany-port3298251704/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (54.86s)
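The any-port result above is the standard 9p host-directory mount workflow. A minimal sketch, assuming the same profile and a hypothetical host directory /tmp/hostdir in place of the test's temporary path:

# start the 9p mount (the test keeps this running as a background daemon)
out/minikube-linux-amd64 mount -p functional-282904 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
# confirm the guest sees the mount, then inspect it
out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-282904 ssh -- ls -la /mount-9p
# tear the mount down when done
out/minikube-linux-amd64 -p functional-282904 ssh "sudo umount -f /mount-9p"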

TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdspecific-port4113370151/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.288417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:30:56.688597  754324 retry.go:31] will retry after 596.39001ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdspecific-port4113370151/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "sudo umount -f /mount-9p": exit status 1 (210.733024ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-282904 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdspecific-port4113370151/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T" /mount1: exit status 1 (335.370951ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:30:58.762871  754324 retry.go:31] will retry after 554.48023ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-282904 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-282904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163067319/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
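The cleanup flow above boils down to: start one background minikube mount process per target path, check each mount from inside the guest, then kill them all at once. A minimal sketch, assuming the functional-282904 profile is running and using a hypothetical host directory /srv/data in place of the test's temp dir:

    # start a background 9p mount (repeat with /mount2, /mount3 for more targets)
    minikube mount -p functional-282904 /srv/data:/mount1 --alsologtostderr -v=1 &
    # confirm the mount is visible inside the VM
    minikube -p functional-282904 ssh "findmnt -T /mount1"
    # tear down every mount process belonging to the profile
    minikube mount -p functional-282904 --kill=true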

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service list -o json
functional_test.go:1494: Took "714.443508ms" to run "out/minikube-linux-amd64 -p functional-282904 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.72:31879
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-282904 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.72:31879
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
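Taken together, the ServiceCmd checks above exercise the main ways of querying a service endpoint. A minimal sketch against the same functional-282904 profile, assuming the hello-node service from the earlier ServiceCmd setup is still deployed:

    # list services, optionally as JSON
    minikube -p functional-282904 service list
    minikube -p functional-282904 service list -o json
    # print the HTTPS and plain HTTP endpoints for hello-node
    minikube -p functional-282904 service --namespace=default --https --url hello-node
    minikube -p functional-282904 service hello-node --url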

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-282904
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-282904
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-282904
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (206.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-053933 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:31:35.639571  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:32:57.561119  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-053933 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.423510408s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.14s)
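The HA bring-up above is a single start invocation plus a status check. A minimal sketch, assuming a minikube binary on PATH (the test drives out/minikube-linux-amd64) and the same KVM2/cri-o environment:

    # start a multi-control-plane (HA) cluster and wait for all components
    minikube start -p ha-053933 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    # verify host, kubelet and apiserver state on every node
    minikube -p ha-053933 status -v=7 --alsologtostderr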

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-053933 -- rollout status deployment/busybox: (3.884787685s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-cll72 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-fnvw9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-gx88f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-cll72 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-fnvw9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-gx88f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-cll72 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-fnvw9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-gx88f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.19s)
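The DNS checks above apply a busybox deployment and resolve internal and external names from each replica. A minimal sketch, assuming the ha-053933 profile from the previous step and the testdata/ha/ha-pod-dns-test.yaml manifest from the minikube source tree; the pod name is a placeholder for one of the names returned by the get pods call:

    minikube kubectl -p ha-053933 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-053933 -- rollout status deployment/busybox
    minikube kubectl -p ha-053933 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # resolve an external and an in-cluster name from one replica (<busybox-pod> is a placeholder)
    minikube kubectl -p ha-053933 -- exec <busybox-pod> -- nslookup kubernetes.io
    minikube kubectl -p ha-053933 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local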

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-cll72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-cll72 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-fnvw9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-fnvw9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-gx88f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-053933 -- exec busybox-7dff88458-gx88f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
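The host-reachability check above first extracts the address that host.minikube.internal resolves to inside the pod, then pings it. A minimal sketch of that pipeline, with the same placeholder pod name as before (192.168.39.1 is the host-side address reported by this run):

    # 5th line of nslookup output, 3rd field, is the resolved address
    minikube kubectl -p ha-053933 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p ha-053933 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"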

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-053933 -v=7 --alsologtostderr
E1007 12:34:53.449242  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.455741  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.467379  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.488822  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.530317  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.611812  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:53.773703  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:54.095298  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:54.737617  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:56.018935  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:34:58.580494  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:35:03.702524  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:35:13.698383  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:35:13.944647  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:35:34.426200  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:35:41.403042  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-053933 -v=7 --alsologtostderr: (50.162679383s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.06s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-053933 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp testdata/cp-test.txt ha-053933:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933:/home/docker/cp-test.txt ha-053933-m02:/home/docker/cp-test_ha-053933_ha-053933-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test_ha-053933_ha-053933-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933:/home/docker/cp-test.txt ha-053933-m03:/home/docker/cp-test_ha-053933_ha-053933-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test_ha-053933_ha-053933-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933:/home/docker/cp-test.txt ha-053933-m04:/home/docker/cp-test_ha-053933_ha-053933-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test_ha-053933_ha-053933-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp testdata/cp-test.txt ha-053933-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m02:/home/docker/cp-test.txt ha-053933:/home/docker/cp-test_ha-053933-m02_ha-053933.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test_ha-053933-m02_ha-053933.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m02:/home/docker/cp-test.txt ha-053933-m03:/home/docker/cp-test_ha-053933-m02_ha-053933-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test_ha-053933-m02_ha-053933-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m02:/home/docker/cp-test.txt ha-053933-m04:/home/docker/cp-test_ha-053933-m02_ha-053933-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test_ha-053933-m02_ha-053933-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp testdata/cp-test.txt ha-053933-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt ha-053933:/home/docker/cp-test_ha-053933-m03_ha-053933.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test_ha-053933-m03_ha-053933.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt ha-053933-m02:/home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test_ha-053933-m03_ha-053933-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m03:/home/docker/cp-test.txt ha-053933-m04:/home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test_ha-053933-m03_ha-053933-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp testdata/cp-test.txt ha-053933-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78584503/001/cp-test_ha-053933-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt ha-053933:/home/docker/cp-test_ha-053933-m04_ha-053933.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933 "sudo cat /home/docker/cp-test_ha-053933-m04_ha-053933.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt ha-053933-m02:/home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test_ha-053933-m04_ha-053933-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 cp ha-053933-m04:/home/docker/cp-test.txt ha-053933-m03:/home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 ssh -n ha-053933-m03 "sudo cat /home/docker/cp-test_ha-053933-m04_ha-053933-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.58s)
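The copy matrix above repeats three basic operations across every node pair: host-to-node, node-to-host and node-to-node, each verified with an ssh cat. A minimal sketch, with illustrative destination paths in place of the test's temp directory:

    # host -> node
    minikube -p ha-053933 cp testdata/cp-test.txt ha-053933:/home/docker/cp-test.txt
    # node -> host (local destination path is illustrative)
    minikube -p ha-053933 cp ha-053933:/home/docker/cp-test.txt /tmp/cp-test_ha-053933.txt
    # node -> node, then verify on the receiving node
    minikube -p ha-053933 cp ha-053933:/home/docker/cp-test.txt ha-053933-m02:/home/docker/cp-test_ha-053933_ha-053933-m02.txt
    minikube -p ha-053933 ssh -n ha-053933-m02 "sudo cat /home/docker/cp-test_ha-053933_ha-053933-m02.txt"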

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 node delete m03 -v=7 --alsologtostderr
E1007 12:44:53.449006  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-053933 node delete m03 -v=7 --alsologtostderr: (15.737773544s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.53s)
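Deleting a secondary control-plane node and re-checking the cluster, as exercised above, is two minikube calls plus a kubectl sanity check, assuming kubectl's current context points at the ha-053933 cluster:

    minikube -p ha-053933 node delete m03 -v=7 --alsologtostderr
    minikube -p ha-053933 status -v=7 --alsologtostderr
    # remaining nodes should all report Ready
    kubectl get nodes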

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (378.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-053933 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:49:53.454089  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:50:13.698130  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:51:16.514069  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-053933 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m17.10498962s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (378.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-053933 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-053933 --control-plane -v=7 --alsologtostderr: (1m13.212383068s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-053933 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.11s)
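Adding a further control-plane member, as above, only needs the --control-plane flag on node add; a minimal sketch against the same profile:

    minikube node add -p ha-053933 --control-plane -v=7 --alsologtostderr
    minikube -p ha-053933 status -v=7 --alsologtostderr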

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (56.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-203611 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1007 12:55:13.698391  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-203611 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.887721955s)
--- PASS: TestJSONOutput/start/Command (56.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-203611 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-203611 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-203611 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-203611 --output=json --user=testUser: (7.381502896s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-315944 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-315944 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.529273ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c7ef1715-2482-485d-9e0d-5b7f010cc99b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-315944] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fe83f0e-fa44-4014-a6b4-049ae204a5e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"707685ab-5c76-4844-879a-e8b79f24c2a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"24d8b526-1df2-4a87-9673-11a338525125","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig"}}
	{"specversion":"1.0","id":"aaec7dc1-69dc-4fbe-a0ed-ab6edb186b6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube"}}
	{"specversion":"1.0","id":"b6fb1a0e-dc82-459b-b3d1-b5414877390a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ede7d9a1-733a-44c9-8d7a-bfdd40950131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fc28bc9e-147b-430d-a8d2-85959f39e548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-315944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-315944
--- PASS: TestErrorJSONOutput (0.23s)
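Each stdout line above is a CloudEvents-style JSON object, so the failure can be extracted mechanically rather than by scraping text. A minimal sketch, assuming jq is installed (jq is not part of this test suite; the type and field names follow the events shown above):

    minikube start -p json-output-error-315944 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # expected to print: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
    minikube delete -p json-output-error-315944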

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.12s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-170844 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-170844 --driver=kvm2  --container-runtime=crio: (40.148219242s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-186224 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-186224 --driver=kvm2  --container-runtime=crio: (45.081951841s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-170844
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-186224
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-186224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-186224
helpers_test.go:175: Cleaning up "first-170844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-170844
--- PASS: TestMinikubeProfile (88.12s)
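The profile juggling above maps onto a handful of CLI calls; a minimal sketch with two throwaway profiles:

    minikube start -p first-170844 --driver=kvm2 --container-runtime=crio
    minikube start -p second-186224 --driver=kvm2 --container-runtime=crio
    # switch the active profile and inspect the list as JSON
    minikube profile first-170844
    minikube profile list -ojson
    # clean up
    minikube delete -p second-186224
    minikube delete -p first-170844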

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-151762 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-151762 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.637547507s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-151762 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-151762 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
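The start-with-mount case above passes the 9p options straight to minikube start and then checks the mount from inside the guest. A minimal sketch of the first profile's flow, using the same flags the test passes:

    minikube start -p mount-start-1-151762 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the test verifies the mount at /minikube-host inside the VM
    minikube -p mount-start-1-151762 ssh -- ls /minikube-host
    minikube -p mount-start-1-151762 ssh -- mount | grep 9p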

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-174363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-174363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.435854173s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.57s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-151762 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-174363
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-174363: (1.290089313s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-174363
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-174363: (21.46923108s)
--- PASS: TestMountStart/serial/RestartStopped (22.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-174363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-723069 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:59:53.448758  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:13.698490  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-723069 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.042561654s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.48s)
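The two-node bring-up above is a plain start with --nodes plus a status check; a minimal sketch:

    minikube start -p multinode-723069 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p multinode-723069 status --alsologtostderr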

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-723069 -- rollout status deployment/busybox: (5.228626406s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-f928n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-zndcj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-f928n -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-zndcj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-f928n -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-zndcj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.86s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-f928n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-f928n -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-zndcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-723069 -- exec busybox-7dff88458-zndcj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (47.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-723069 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-723069 -v 3 --alsologtostderr: (46.585066516s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.21s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-723069 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp testdata/cp-test.txt multinode-723069:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069:/home/docker/cp-test.txt multinode-723069-m02:/home/docker/cp-test_multinode-723069_multinode-723069-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test_multinode-723069_multinode-723069-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069:/home/docker/cp-test.txt multinode-723069-m03:/home/docker/cp-test_multinode-723069_multinode-723069-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test_multinode-723069_multinode-723069-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp testdata/cp-test.txt multinode-723069-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt multinode-723069:/home/docker/cp-test_multinode-723069-m02_multinode-723069.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test_multinode-723069-m02_multinode-723069.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m02:/home/docker/cp-test.txt multinode-723069-m03:/home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test_multinode-723069-m02_multinode-723069-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp testdata/cp-test.txt multinode-723069-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2261398320/001/cp-test_multinode-723069-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt multinode-723069:/home/docker/cp-test_multinode-723069-m03_multinode-723069.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069 "sudo cat /home/docker/cp-test_multinode-723069-m03_multinode-723069.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 cp multinode-723069-m03:/home/docker/cp-test.txt multinode-723069-m02:/home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 ssh -n multinode-723069-m02 "sudo cat /home/docker/cp-test_multinode-723069-m03_multinode-723069-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.71s)
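
Every step above repeats the same copy-and-verify pattern: "minikube cp" pushes a file onto a node, then "minikube ssh -n <node> sudo cat <path>" reads it back so the contents can be compared. A minimal Go sketch of that pattern, reusing the binary path, profile name and file locations from the log; the expected-content string is a placeholder and this is not the test's own helper code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// copyAndVerify pushes src to node:dst with `minikube cp`, then reads the file
// back over `minikube ssh` and compares it against the expected content.
func copyAndVerify(minikube, profile, node, src, dst, want string) error {
	cp := exec.Command(minikube, "-p", profile, "cp", src, fmt.Sprintf("%s:%s", node, dst))
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
	out, err := cat.Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	if strings.TrimSpace(string(out)) != strings.TrimSpace(want) {
		return fmt.Errorf("content mismatch on %s:%s", node, dst)
	}
	return nil
}

func main() {
	// Values taken from the log above; "expected content" is a placeholder.
	err := copyAndVerify("out/minikube-linux-amd64", "multinode-723069",
		"multinode-723069-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt", "expected content")
	fmt.Println(err)
}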

TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 node stop m03: (1.452367983s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-723069 status: exit status 7 (449.871069ms)

                                                
                                                
-- stdout --
	multinode-723069
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-723069-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-723069-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr: exit status 7 (455.718012ms)

                                                
                                                
-- stdout --
	multinode-723069
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-723069-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-723069-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:01:47.318454  782819 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:01:47.318579  782819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:01:47.318589  782819 out.go:358] Setting ErrFile to fd 2...
	I1007 13:01:47.318594  782819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:01:47.318780  782819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:01:47.318971  782819 out.go:352] Setting JSON to false
	I1007 13:01:47.319013  782819 mustload.go:65] Loading cluster: multinode-723069
	I1007 13:01:47.319084  782819 notify.go:220] Checking for updates...
	I1007 13:01:47.319587  782819 config.go:182] Loaded profile config "multinode-723069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:01:47.319619  782819 status.go:174] checking status of multinode-723069 ...
	I1007 13:01:47.320096  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.320149  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.341272  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I1007 13:01:47.341910  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.342546  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.342572  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.342931  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.343133  782819 main.go:141] libmachine: (multinode-723069) Calling .GetState
	I1007 13:01:47.344928  782819 status.go:371] multinode-723069 host status = "Running" (err=<nil>)
	I1007 13:01:47.344953  782819 host.go:66] Checking if "multinode-723069" exists ...
	I1007 13:01:47.345288  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.345335  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.361034  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I1007 13:01:47.361458  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.361921  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.361940  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.362234  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.362420  782819 main.go:141] libmachine: (multinode-723069) Calling .GetIP
	I1007 13:01:47.365081  782819 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:01:47.365526  782819 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:01:47.365553  782819 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:01:47.365705  782819 host.go:66] Checking if "multinode-723069" exists ...
	I1007 13:01:47.366133  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.366180  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.381761  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I1007 13:01:47.382319  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.382820  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.382842  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.383178  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.383333  782819 main.go:141] libmachine: (multinode-723069) Calling .DriverName
	I1007 13:01:47.383518  782819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:01:47.383548  782819 main.go:141] libmachine: (multinode-723069) Calling .GetSSHHostname
	I1007 13:01:47.386305  782819 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:01:47.386753  782819 main.go:141] libmachine: (multinode-723069) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:67:43", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 13:59:04 +0000 UTC Type:0 Mac:52:54:00:b9:67:43 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-723069 Clientid:01:52:54:00:b9:67:43}
	I1007 13:01:47.386795  782819 main.go:141] libmachine: (multinode-723069) DBG | domain multinode-723069 has defined IP address 192.168.39.213 and MAC address 52:54:00:b9:67:43 in network mk-multinode-723069
	I1007 13:01:47.386936  782819 main.go:141] libmachine: (multinode-723069) Calling .GetSSHPort
	I1007 13:01:47.387151  782819 main.go:141] libmachine: (multinode-723069) Calling .GetSSHKeyPath
	I1007 13:01:47.387306  782819 main.go:141] libmachine: (multinode-723069) Calling .GetSSHUsername
	I1007 13:01:47.387520  782819 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069/id_rsa Username:docker}
	I1007 13:01:47.470177  782819 ssh_runner.go:195] Run: systemctl --version
	I1007 13:01:47.476595  782819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:01:47.493008  782819 kubeconfig.go:125] found "multinode-723069" server: "https://192.168.39.213:8443"
	I1007 13:01:47.493051  782819 api_server.go:166] Checking apiserver status ...
	I1007 13:01:47.493085  782819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:01:47.509175  782819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup
	W1007 13:01:47.521049  782819 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:01:47.521124  782819 ssh_runner.go:195] Run: ls
	I1007 13:01:47.531772  782819 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I1007 13:01:47.536485  782819 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I1007 13:01:47.536521  782819 status.go:463] multinode-723069 apiserver status = Running (err=<nil>)
	I1007 13:01:47.536534  782819 status.go:176] multinode-723069 status: &{Name:multinode-723069 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:01:47.536560  782819 status.go:174] checking status of multinode-723069-m02 ...
	I1007 13:01:47.536890  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.536946  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.553121  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I1007 13:01:47.553582  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.554142  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.554165  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.554501  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.554687  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetState
	I1007 13:01:47.556045  782819 status.go:371] multinode-723069-m02 host status = "Running" (err=<nil>)
	I1007 13:01:47.556065  782819 host.go:66] Checking if "multinode-723069-m02" exists ...
	I1007 13:01:47.556443  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.556500  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.572521  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1007 13:01:47.572960  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.573448  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.573470  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.573851  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.574055  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetIP
	I1007 13:01:47.576735  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | domain multinode-723069-m02 has defined MAC address 52:54:00:b6:65:04 in network mk-multinode-723069
	I1007 13:01:47.577103  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:65:04", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 14:00:03 +0000 UTC Type:0 Mac:52:54:00:b6:65:04 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-723069-m02 Clientid:01:52:54:00:b6:65:04}
	I1007 13:01:47.577139  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | domain multinode-723069-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:b6:65:04 in network mk-multinode-723069
	I1007 13:01:47.577247  782819 host.go:66] Checking if "multinode-723069-m02" exists ...
	I1007 13:01:47.577579  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.577634  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.593566  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I1007 13:01:47.594071  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.594618  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.594641  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.594992  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.595187  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .DriverName
	I1007 13:01:47.595399  782819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:01:47.595422  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetSSHHostname
	I1007 13:01:47.598635  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | domain multinode-723069-m02 has defined MAC address 52:54:00:b6:65:04 in network mk-multinode-723069
	I1007 13:01:47.599008  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:65:04", ip: ""} in network mk-multinode-723069: {Iface:virbr1 ExpiryTime:2024-10-07 14:00:03 +0000 UTC Type:0 Mac:52:54:00:b6:65:04 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-723069-m02 Clientid:01:52:54:00:b6:65:04}
	I1007 13:01:47.599033  782819 main.go:141] libmachine: (multinode-723069-m02) DBG | domain multinode-723069-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:b6:65:04 in network mk-multinode-723069
	I1007 13:01:47.599247  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetSSHPort
	I1007 13:01:47.599458  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetSSHKeyPath
	I1007 13:01:47.599630  782819 main.go:141] libmachine: (multinode-723069-m02) Calling .GetSSHUsername
	I1007 13:01:47.599833  782819 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18424-747025/.minikube/machines/multinode-723069-m02/id_rsa Username:docker}
	I1007 13:01:47.685575  782819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:01:47.700662  782819 status.go:176] multinode-723069-m02 status: &{Name:multinode-723069-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:01:47.700730  782819 status.go:174] checking status of multinode-723069-m03 ...
	I1007 13:01:47.701113  782819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:01:47.701173  782819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:01:47.717507  782819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I1007 13:01:47.718175  782819 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:01:47.718762  782819 main.go:141] libmachine: Using API Version  1
	I1007 13:01:47.718782  782819 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:01:47.719112  782819 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:01:47.719282  782819 main.go:141] libmachine: (multinode-723069-m03) Calling .GetState
	I1007 13:01:47.721780  782819 status.go:371] multinode-723069-m03 host status = "Stopped" (err=<nil>)
	I1007 13:01:47.721801  782819 status.go:384] host is not running, skipping remaining checks
	I1007 13:01:47.721808  782819 status.go:176] multinode-723069-m03 status: &{Name:multinode-723069-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
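
The status command logged above decides that the apiserver is Running by fetching /healthz on the control-plane endpoint and treating an HTTP 200 as healthy (see the api_server.go lines in the stderr block). A rough Go sketch of that probe, using the endpoint from the log; the real check authenticates against the cluster, whereas this sketch skips TLS verification only to stay self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs the same kind of check status.go logs above:
// GET <server>/healthz and treat HTTP 200 as "apiserver Running".
func probeHealthz(server string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification keeps the sketch self-contained;
		// the actual check would trust the cluster's CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := probeHealthz("https://192.168.39.213:8443")
	fmt.Println(ok, err)
}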

TestMultiNode/serial/StartAfterStop (38.48s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 node start m03 -v=7 --alsologtostderr: (37.818556283s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.48s)

TestMultiNode/serial/DeleteNode (2.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-723069 node delete m03: (1.653768999s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)
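
The final verification above runs kubectl with a go-template that prints the status of each node's Ready condition, one per line. A short Go sketch that issues the same query via os/exec and collects the results (quoting simplified compared to the logged command line):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// readyStatuses mirrors the kubectl invocation in the log: one entry per node
// containing the status of its Ready condition ("True", "False" or "Unknown").
func readyStatuses() ([]string, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	statuses, err := readyStatuses()
	fmt.Println(statuses, err)
}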

TestMultiNode/serial/RestartMultiNode (199.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-723069 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 13:10:13.698327  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-723069 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.428708319s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-723069 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (199.01s)

TestMultiNode/serial/ValidateNameConflict (43.71s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-723069
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-723069-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-723069-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.561001ms)

                                                
                                                
-- stdout --
	* [multinode-723069-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-723069-m02' is duplicated with machine name 'multinode-723069-m02' in profile 'multinode-723069'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-723069-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-723069-m03 --driver=kvm2  --container-runtime=crio: (42.681737377s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-723069
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-723069: exit status 80 (221.993149ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-723069 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-723069-m03 already exists in multinode-723069-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-723069-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.71s)
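
Both negative cases above are recognized purely from minikube's exit code: 14 (MK_USAGE) for the duplicated profile name and 80 (GUEST_NODE_ADD) for the conflicting node add. A hedged Go sketch of reading such exit codes with os/exec; the binary path and arguments mirror the log, and 14/80 are simply the values observed in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runMinikube runs a minikube command and reports its exit code, so a caller
// can distinguish usage errors (14 in the log above) from node-add failures (80).
func runMinikube(bin string, args ...string) (int, string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err == nil {
		return 0, string(out)
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out)
	}
	return -1, err.Error() // binary missing, not executable, etc.
}

func main() {
	code, out := runMinikube("out/minikube-linux-amd64", "node", "add", "-p", "multinode-723069")
	fmt.Println(code)
	fmt.Println(out)
}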

TestScheduledStopUnix (114.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-128662 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-128662 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.350047305s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-128662 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-128662 -n scheduled-stop-128662
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-128662 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 13:17:47.389071  754324 retry.go:31] will retry after 114.1µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.390197  754324 retry.go:31] will retry after 103.899µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.391367  754324 retry.go:31] will retry after 284.799µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.392518  754324 retry.go:31] will retry after 422.739µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.393677  754324 retry.go:31] will retry after 367.138µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.394837  754324 retry.go:31] will retry after 659.969µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.395986  754324 retry.go:31] will retry after 908.851µs: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.397135  754324 retry.go:31] will retry after 2.296468ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.400378  754324 retry.go:31] will retry after 1.316281ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.402632  754324 retry.go:31] will retry after 5.404752ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.408949  754324 retry.go:31] will retry after 6.891039ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.416296  754324 retry.go:31] will retry after 12.853791ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.429592  754324 retry.go:31] will retry after 15.416771ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.445920  754324 retry.go:31] will retry after 24.559017ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
I1007 13:17:47.471261  754324 retry.go:31] will retry after 32.374664ms: open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-128662 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-128662 -n scheduled-stop-128662
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-128662
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-128662 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-128662
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-128662: exit status 7 (77.692629ms)

                                                
                                                
-- stdout --
	scheduled-stop-128662
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-128662 -n scheduled-stop-128662
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-128662 -n scheduled-stop-128662: exit status 7 (67.60623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-128662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-128662
--- PASS: TestScheduledStopUnix (114.09s)
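
The retry.go lines above poll the profile's pid file with steadily growing delays until the scheduled-stop process has written it. A minimal sketch of that retry-with-backoff pattern; the path comes from the log, while the attempt count and starting delay are illustrative rather than the values minikube uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// readWithRetry retries os.ReadFile with a growing delay, similar in spirit to
// the retry.go lines above that poll the scheduled-stop pid file until it exists.
func readWithRetry(path string, attempts int, initial time.Duration) ([]byte, error) {
	delay := initial
	var lastErr error
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2 // roughly doubling, as the logged intervals do
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	pid := "/home/jenkins/minikube-integration/18424-747025/.minikube/profiles/scheduled-stop-128662/pid"
	data, err := readWithRetry(pid, 15, 100*time.Microsecond)
	fmt.Println(string(data), err)
}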

TestRunningBinaryUpgrade (218.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4087506391 start -p running-upgrade-533449 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4087506391 start -p running-upgrade-533449 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.302308936s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-533449 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-533449 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.613096846s)
helpers_test.go:175: Cleaning up "running-upgrade-533449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-533449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-533449: (1.080052135s)
--- PASS: TestRunningBinaryUpgrade (218.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.685945ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-499494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (94.26s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-499494 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-499494 --driver=kvm2  --container-runtime=crio: (1m33.99217172s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-499494 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.26s)

TestNetworkPlugins/group/false (3.52s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-221184 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-221184 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (116.183407ms)

                                                
                                                
-- stdout --
	* [false-221184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:19:45.348740  791238 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:19:45.349004  791238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:19:45.349023  791238 out.go:358] Setting ErrFile to fd 2...
	I1007 13:19:45.349029  791238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:19:45.349257  791238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-747025/.minikube/bin
	I1007 13:19:45.349872  791238 out.go:352] Setting JSON to false
	I1007 13:19:45.350873  791238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10934,"bootTime":1728296251,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:19:45.350975  791238 start.go:139] virtualization: kvm guest
	I1007 13:19:45.354447  791238 out.go:177] * [false-221184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:19:45.355954  791238 notify.go:220] Checking for updates...
	I1007 13:19:45.355962  791238 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:19:45.357659  791238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:19:45.359317  791238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-747025/kubeconfig
	I1007 13:19:45.360615  791238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-747025/.minikube
	I1007 13:19:45.361807  791238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:19:45.363080  791238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:19:45.364829  791238 config.go:182] Loaded profile config "NoKubernetes-499494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:19:45.364948  791238 config.go:182] Loaded profile config "offline-crio-484725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:19:45.365019  791238 config.go:182] Loaded profile config "running-upgrade-533449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1007 13:19:45.365113  791238 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:19:45.406949  791238 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:19:45.408281  791238 start.go:297] selected driver: kvm2
	I1007 13:19:45.408300  791238 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:19:45.408314  791238 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:19:45.410320  791238 out.go:201] 
	W1007 13:19:45.411984  791238 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1007 13:19:45.413352  791238 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-221184 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-221184

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-221184"

                                                
                                                
----------------------- debugLogs end: false-221184 [took: 3.240783682s] --------------------------------
helpers_test.go:175: Cleaning up "false-221184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-221184
--- PASS: TestNetworkPlugins/group/false (3.52s)

TestNoKubernetes/serial/StartWithStopK8s (39.27s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.065452637s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-499494 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-499494 status -o json: exit status 2 (270.940647ms)
-- stdout --
	{"Name":"NoKubernetes-499494","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-499494
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.27s)

TestPause/serial/Start (108.92s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-011126 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-011126 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.922962949s)
--- PASS: TestPause/serial/Start (108.92s)

TestNoKubernetes/serial/Start (49.76s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-499494 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.762681437s)
--- PASS: TestNoKubernetes/serial/Start (49.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-499494 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-499494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.733748ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (32.24s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.484626948s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.754663624s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.24s)

TestNoKubernetes/serial/Stop (1.32s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-499494
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-499494: (1.323095011s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (21.45s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-499494 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-499494 --driver=kvm2  --container-runtime=crio: (21.447351972s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.45s)

TestStoppedBinaryUpgrade/Setup (0.45s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

TestStoppedBinaryUpgrade/Upgrade (111.74s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1084172400 start -p stopped-upgrade-193275 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1084172400 start -p stopped-upgrade-193275 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m4.15720603s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1084172400 -p stopped-upgrade-193275 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1084172400 -p stopped-upgrade-193275 stop: (1.335124551s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-193275 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-193275 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.244197981s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.74s)

TestPause/serial/SecondStartNoReconfiguration (63.21s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-011126 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-011126 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.179000218s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (63.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-499494 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-499494 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.120735ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestPause/serial/Pause (0.77s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-011126 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-011126 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-011126 --output=json --layout=cluster: exit status 2 (256.370039ms)
-- stdout --
	{"Name":"pause-011126","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-011126","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.69s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-011126 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.94s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-011126 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (0.73s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-011126 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.73s)

TestPause/serial/VerifyDeletedResources (3.66s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.656600717s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-193275
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestStartStop/group/no-preload/serial/FirstStart (121.14s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (2m1.13996157s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (121.14s)

TestStartStop/group/embed-certs/serial/FirstStart (55.49s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-653322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-653322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (55.494066019s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.49s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-016701 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd92f63e-0c20-4803-b0a6-77d2e58ecf44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd92f63e-0c20-4803-b0a6-77d2e58ecf44] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004711123s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-016701 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014659445s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-016701 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-653322 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ae5f6934-42c5-4338-8ee2-d9d42e1fc373] Pending
helpers_test.go:344: "busybox" [ae5f6934-42c5-4338-8ee2-d9d42e1fc373] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ae5f6934-42c5-4338-8ee2-d9d42e1fc373] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004691604s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-653322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-653322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-653322 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/SecondStart (685.11s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m24.837924469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016701 -n no-preload-016701
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (685.11s)

TestStartStop/group/embed-certs/serial/SecondStart (600.02s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-653322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:30:13.698707  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-653322 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m59.760333258s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-653322 -n embed-certs-653322
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (600.02s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (298.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-489319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-489319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (4m58.887305032s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (298.89s)

TestStartStop/group/old-k8s-version/serial/Stop (6.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-120978 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-120978 --alsologtostderr -v=3: (6.322425295s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-120978 -n old-k8s-version-120978: exit status 7 (71.454972ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-120978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [06a51b2d-2603-4b0e-8730-e3365884087f] Pending
helpers_test.go:344: "busybox" [06a51b2d-2603-4b0e-8730-e3365884087f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [06a51b2d-2603-4b0e-8730-e3365884087f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005617876s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-489319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-489319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032597278s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-489319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-489319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:39:53.449116  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-489319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m18.990588895s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-489319 -n default-k8s-diff-port-489319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.27s)

TestStartStop/group/newest-cni/serial/FirstStart (47.78s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-006310 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 13:55:13.698253  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-006310 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.77953031s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.78s)

TestNetworkPlugins/group/auto/Start (57.78s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (57.782981874s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.78s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-006310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-006310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.366155231s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/newest-cni/serial/Stop (10.56s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-006310 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-006310 --alsologtostderr -v=3: (10.559604933s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006310 -n newest-cni-006310
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006310 -n newest-cni-006310: exit status 7 (83.851871ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-006310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (36.68s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-006310 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-006310 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.242914098s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006310 -n newest-cni-006310
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.68s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-221184 "pgrep -a kubelet"
I1007 13:56:16.955372  754324 config.go:182] Loaded profile config "auto-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (14.33s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z8pwl" [bd1482fc-b42f-4898-abc8-5c6aec174eaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z8pwl" [bd1482fc-b42f-4898-abc8-5c6aec174eaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.00467977s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.33s)

TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-006310 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (4.66s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-006310 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-006310 --alsologtostderr -v=1: (1.808476442s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006310 -n newest-cni-006310
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006310 -n newest-cni-006310: exit status 2 (362.301685ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006310 -n newest-cni-006310
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006310 -n newest-cni-006310: exit status 2 (396.429815ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-006310 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006310 -n newest-cni-006310
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006310 -n newest-cni-006310
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.66s)

TestNetworkPlugins/group/kindnet/Start (61.85s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m1.847169212s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.85s)

TestNetworkPlugins/group/calico/Start (99.66s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.662886345s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.66s)

TestNetworkPlugins/group/custom-flannel/Start (111.87s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1007 13:57:16.867395  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:16.873924  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:16.885471  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:16.907001  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:16.948538  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:17.030114  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:17.191748  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:17.513851  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:18.155235  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:19.437109  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:21.999436  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:27.121225  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:37.363008  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/no-preload-016701/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m51.87167949s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (111.87s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nzzt6" [2a49bb3b-6601-4516-900d-f3138bc783bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004033529s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-221184 "pgrep -a kubelet"
I1007 13:57:45.254914  754324 config.go:182] Loaded profile config "kindnet-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4mgkv" [a369cef2-1287-45e5-ab16-569c289a2377] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4mgkv" [a369cef2-1287-45e5-ab16-569c289a2377] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007544578s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (88.49s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.486615651s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.49s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tlbgs" [6856ad4e-1e02-4f62-90ee-59c0c4a3691f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00635046s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-221184 "pgrep -a kubelet"
I1007 13:58:24.355120  754324 config.go:182] Loaded profile config "calico-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xgnxg" [d447b307-baf0-44cc-b30d-948b4699fb81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xgnxg" [d447b307-baf0-44cc-b30d-948b4699fb81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004699625s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

TestNetworkPlugins/group/calico/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-221184 "pgrep -a kubelet"
I1007 13:58:39.816449  754324 config.go:182] Loaded profile config "custom-flannel-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-221184 replace --force -f testdata/netcat-deployment.yaml
I1007 13:58:40.057312  754324 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lmxkq" [3b133499-6350-41e6-800e-6c054440073d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lmxkq" [3b133499-6350-41e6-800e-6c054440073d] Running
E1007 13:58:47.995274  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.001834  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.013381  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.034936  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.076245  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.158074  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.319688  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:48.641871  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:49.283835  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:58:50.565698  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004961158s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (71.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1007 13:58:58.250106  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:59:08.491549  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m11.370468294s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1007 13:59:28.973142  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-221184 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m7.054530621s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.05s)
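To see which CNI configuration a start like this actually wrote onto the node, the profile can be inspected over ssh; a sketch assuming the standard /etc/cni/net.d config directory (the path is an assumption, not something printed by this run):

    out/minikube-linux-amd64 ssh -p bridge-221184 "ls /etc/cni/net.d"
    out/minikube-linux-amd64 ssh -p bridge-221184 "sudo cat /etc/cni/net.d/*"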

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-221184 "pgrep -a kubelet"
I1007 13:59:42.169935  754324 config.go:182] Loaded profile config "enable-default-cni-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nwjlj" [4d0bafd4-96a6-4353-aaef-621aef3c6e94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nwjlj" [4d0bafd4-96a6-4353-aaef-621aef3c6e94] Running
E1007 13:59:53.448779  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/functional-282904/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004055178s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t6spp" [c8282f7a-16ed-4636-bbb6-21c25cdd2cea] Running
E1007 14:00:09.935280  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/old-k8s-version-120978/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005103545s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
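The ControllerPod check waits for the flannel DaemonSet pod to become healthy. A rough manual equivalent, reusing the kube-flannel namespace and app=flannel label that appear in the log above:

    kubectl --context flannel-221184 -n kube-flannel get daemonset,pods -l app=flannel
    kubectl --context flannel-221184 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m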

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-221184 "pgrep -a kubelet"
I1007 14:00:12.044977  754324 config.go:182] Loaded profile config "flannel-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8zv68" [70d24b10-357d-4e54-9bd9-e10b3d0bd5ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 14:00:13.699048  754324 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-747025/.minikube/profiles/addons-054971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8zv68" [70d24b10-357d-4e54-9bd9-e10b3d0bd5ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004987887s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-221184 "pgrep -a kubelet"
I1007 14:00:17.186356  754324 config.go:182] Loaded profile config "bridge-221184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-221184 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qwclp" [2f99d7e9-ad52-4eea-a9ba-01608844bf5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qwclp" [2f99d7e9-ad52-4eea-a9ba-01608844bf5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004807824s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-221184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-221184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.16
262 TestNetworkPlugins/group/kubenet 3.06
270 TestNetworkPlugins/group/cilium 4.48
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-054971 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-288417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-288417
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-221184 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-221184

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-221184"

                                                
                                                
----------------------- debugLogs end: kubenet-221184 [took: 2.910700447s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-221184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-221184
--- SKIP: TestNetworkPlugins/group/kubenet (3.06s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-221184 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-221184" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-221184

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-221184" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-221184"

                                                
                                                
----------------------- debugLogs end: cilium-221184 [took: 4.07916521s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-221184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-221184
--- SKIP: TestNetworkPlugins/group/cilium (4.48s)
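The cilium group shows the same pattern: the test is skipped up front, so neither a kubeconfig context nor a minikube profile named cilium-221184 ever exists. That is why kubectl reports "context was not found" and minikube reports "Profile ... not found" for every probe, and why the collected kubeconfig is empty (clusters, contexts, and users all null). A hedged sketch of the corresponding checks, plus a hypothetical manual invocation of the CNI under test (not part of this run):

    kubectl config get-contexts           # no cilium-221184 context expected
    minikube profile list                  # no cilium-221184 profile expected
    # to exercise the cilium CNI manually on a similar runner (hypothetical invocation):
    minikube start -p cilium-221184 --container-runtime=crio --cni=cilium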

                                                
                                    